Jan 21 08:04:40 np0005590528 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 21 08:04:40 np0005590528 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 21 08:04:40 np0005590528 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 08:04:40 np0005590528 kernel: BIOS-provided physical RAM map:
Jan 21 08:04:40 np0005590528 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 21 08:04:40 np0005590528 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 21 08:04:40 np0005590528 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 21 08:04:40 np0005590528 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 21 08:04:40 np0005590528 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 21 08:04:40 np0005590528 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 21 08:04:40 np0005590528 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 21 08:04:40 np0005590528 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 21 08:04:40 np0005590528 kernel: NX (Execute Disable) protection: active
Jan 21 08:04:40 np0005590528 kernel: APIC: Static calls initialized
Jan 21 08:04:40 np0005590528 kernel: SMBIOS 2.8 present.
Jan 21 08:04:40 np0005590528 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 21 08:04:40 np0005590528 kernel: Hypervisor detected: KVM
Jan 21 08:04:40 np0005590528 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 21 08:04:40 np0005590528 kernel: kvm-clock: using sched offset of 3402970076 cycles
Jan 21 08:04:40 np0005590528 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 21 08:04:40 np0005590528 kernel: tsc: Detected 2799.998 MHz processor
Jan 21 08:04:40 np0005590528 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 21 08:04:40 np0005590528 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 21 08:04:40 np0005590528 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 21 08:04:40 np0005590528 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 21 08:04:40 np0005590528 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 21 08:04:40 np0005590528 kernel: Using GB pages for direct mapping
Jan 21 08:04:40 np0005590528 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 21 08:04:40 np0005590528 kernel: ACPI: Early table checksum verification disabled
Jan 21 08:04:40 np0005590528 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 21 08:04:40 np0005590528 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 08:04:40 np0005590528 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 08:04:40 np0005590528 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 08:04:40 np0005590528 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 21 08:04:40 np0005590528 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 08:04:40 np0005590528 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 08:04:40 np0005590528 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 21 08:04:40 np0005590528 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 21 08:04:40 np0005590528 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 21 08:04:40 np0005590528 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 21 08:04:40 np0005590528 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 21 08:04:40 np0005590528 kernel: No NUMA configuration found
Jan 21 08:04:40 np0005590528 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 21 08:04:40 np0005590528 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 21 08:04:40 np0005590528 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 21 08:04:40 np0005590528 kernel: Zone ranges:
Jan 21 08:04:40 np0005590528 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 21 08:04:40 np0005590528 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 21 08:04:40 np0005590528 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 21 08:04:40 np0005590528 kernel:  Device   empty
Jan 21 08:04:40 np0005590528 kernel: Movable zone start for each node
Jan 21 08:04:40 np0005590528 kernel: Early memory node ranges
Jan 21 08:04:40 np0005590528 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 21 08:04:40 np0005590528 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 21 08:04:40 np0005590528 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 21 08:04:40 np0005590528 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 21 08:04:40 np0005590528 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 21 08:04:40 np0005590528 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 21 08:04:40 np0005590528 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 21 08:04:40 np0005590528 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 21 08:04:40 np0005590528 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 21 08:04:40 np0005590528 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 21 08:04:40 np0005590528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 21 08:04:40 np0005590528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 21 08:04:40 np0005590528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 21 08:04:40 np0005590528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 21 08:04:40 np0005590528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 21 08:04:40 np0005590528 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 21 08:04:40 np0005590528 kernel: TSC deadline timer available
Jan 21 08:04:40 np0005590528 kernel: CPU topo: Max. logical packages:   8
Jan 21 08:04:40 np0005590528 kernel: CPU topo: Max. logical dies:       8
Jan 21 08:04:40 np0005590528 kernel: CPU topo: Max. dies per package:   1
Jan 21 08:04:40 np0005590528 kernel: CPU topo: Max. threads per core:   1
Jan 21 08:04:40 np0005590528 kernel: CPU topo: Num. cores per package:     1
Jan 21 08:04:40 np0005590528 kernel: CPU topo: Num. threads per package:   1
Jan 21 08:04:40 np0005590528 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 21 08:04:40 np0005590528 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 21 08:04:40 np0005590528 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 21 08:04:40 np0005590528 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 21 08:04:40 np0005590528 kernel: Booting paravirtualized kernel on KVM
Jan 21 08:04:40 np0005590528 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 21 08:04:40 np0005590528 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 21 08:04:40 np0005590528 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 21 08:04:40 np0005590528 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 21 08:04:40 np0005590528 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 08:04:40 np0005590528 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 21 08:04:40 np0005590528 kernel: random: crng init done
Jan 21 08:04:40 np0005590528 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: Fallback order for Node 0: 0 
Jan 21 08:04:40 np0005590528 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 21 08:04:40 np0005590528 kernel: Policy zone: Normal
Jan 21 08:04:40 np0005590528 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 21 08:04:40 np0005590528 kernel: software IO TLB: area num 8.
Jan 21 08:04:40 np0005590528 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 21 08:04:40 np0005590528 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 21 08:04:40 np0005590528 kernel: ftrace: allocated 194 pages with 3 groups
Jan 21 08:04:40 np0005590528 kernel: Dynamic Preempt: voluntary
Jan 21 08:04:40 np0005590528 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 21 08:04:40 np0005590528 kernel: rcu: 	RCU event tracing is enabled.
Jan 21 08:04:40 np0005590528 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 21 08:04:40 np0005590528 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 21 08:04:40 np0005590528 kernel: 	Rude variant of Tasks RCU enabled.
Jan 21 08:04:40 np0005590528 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 21 08:04:40 np0005590528 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 21 08:04:40 np0005590528 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 21 08:04:40 np0005590528 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 08:04:40 np0005590528 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 08:04:40 np0005590528 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 08:04:40 np0005590528 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 21 08:04:40 np0005590528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 21 08:04:40 np0005590528 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 21 08:04:40 np0005590528 kernel: Console: colour VGA+ 80x25
Jan 21 08:04:40 np0005590528 kernel: printk: console [ttyS0] enabled
Jan 21 08:04:40 np0005590528 kernel: ACPI: Core revision 20230331
Jan 21 08:04:40 np0005590528 kernel: APIC: Switch to symmetric I/O mode setup
Jan 21 08:04:40 np0005590528 kernel: x2apic enabled
Jan 21 08:04:40 np0005590528 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 21 08:04:40 np0005590528 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 21 08:04:40 np0005590528 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 21 08:04:40 np0005590528 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 21 08:04:40 np0005590528 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 21 08:04:40 np0005590528 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 21 08:04:40 np0005590528 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 21 08:04:40 np0005590528 kernel: Spectre V2 : Mitigation: Retpolines
Jan 21 08:04:40 np0005590528 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 21 08:04:40 np0005590528 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 21 08:04:40 np0005590528 kernel: RETBleed: Mitigation: untrained return thunk
Jan 21 08:04:40 np0005590528 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 21 08:04:40 np0005590528 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 21 08:04:40 np0005590528 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 21 08:04:40 np0005590528 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 21 08:04:40 np0005590528 kernel: x86/bugs: return thunk changed
Jan 21 08:04:40 np0005590528 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 21 08:04:40 np0005590528 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 21 08:04:40 np0005590528 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 21 08:04:40 np0005590528 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 21 08:04:40 np0005590528 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 21 08:04:40 np0005590528 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 21 08:04:40 np0005590528 kernel: Freeing SMP alternatives memory: 40K
Jan 21 08:04:40 np0005590528 kernel: pid_max: default: 32768 minimum: 301
Jan 21 08:04:40 np0005590528 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 21 08:04:40 np0005590528 kernel: landlock: Up and running.
Jan 21 08:04:40 np0005590528 kernel: Yama: becoming mindful.
Jan 21 08:04:40 np0005590528 kernel: SELinux:  Initializing.
Jan 21 08:04:40 np0005590528 kernel: LSM support for eBPF active
Jan 21 08:04:40 np0005590528 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 21 08:04:40 np0005590528 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 21 08:04:40 np0005590528 kernel: ... version:                0
Jan 21 08:04:40 np0005590528 kernel: ... bit width:              48
Jan 21 08:04:40 np0005590528 kernel: ... generic registers:      6
Jan 21 08:04:40 np0005590528 kernel: ... value mask:             0000ffffffffffff
Jan 21 08:04:40 np0005590528 kernel: ... max period:             00007fffffffffff
Jan 21 08:04:40 np0005590528 kernel: ... fixed-purpose events:   0
Jan 21 08:04:40 np0005590528 kernel: ... event mask:             000000000000003f
Jan 21 08:04:40 np0005590528 kernel: signal: max sigframe size: 1776
Jan 21 08:04:40 np0005590528 kernel: rcu: Hierarchical SRCU implementation.
Jan 21 08:04:40 np0005590528 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 21 08:04:40 np0005590528 kernel: smp: Bringing up secondary CPUs ...
Jan 21 08:04:40 np0005590528 kernel: smpboot: x86: Booting SMP configuration:
Jan 21 08:04:40 np0005590528 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 21 08:04:40 np0005590528 kernel: smp: Brought up 1 node, 8 CPUs
Jan 21 08:04:40 np0005590528 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 21 08:04:40 np0005590528 kernel: node 0 deferred pages initialised in 11ms
Jan 21 08:04:40 np0005590528 kernel: Memory: 7763824K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 21 08:04:40 np0005590528 kernel: devtmpfs: initialized
Jan 21 08:04:40 np0005590528 kernel: x86/mm: Memory block size: 128MB
Jan 21 08:04:40 np0005590528 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 21 08:04:40 np0005590528 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 21 08:04:40 np0005590528 kernel: pinctrl core: initialized pinctrl subsystem
Jan 21 08:04:40 np0005590528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 21 08:04:40 np0005590528 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 21 08:04:40 np0005590528 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 21 08:04:40 np0005590528 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 21 08:04:40 np0005590528 kernel: audit: initializing netlink subsys (disabled)
Jan 21 08:04:40 np0005590528 kernel: audit: type=2000 audit(1769000679.029:1): state=initialized audit_enabled=0 res=1
Jan 21 08:04:40 np0005590528 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 21 08:04:40 np0005590528 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 21 08:04:40 np0005590528 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 21 08:04:40 np0005590528 kernel: cpuidle: using governor menu
Jan 21 08:04:40 np0005590528 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 21 08:04:40 np0005590528 kernel: PCI: Using configuration type 1 for base access
Jan 21 08:04:40 np0005590528 kernel: PCI: Using configuration type 1 for extended access
Jan 21 08:04:40 np0005590528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 21 08:04:40 np0005590528 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 21 08:04:40 np0005590528 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 21 08:04:40 np0005590528 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 21 08:04:40 np0005590528 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 21 08:04:40 np0005590528 kernel: Demotion targets for Node 0: null
Jan 21 08:04:40 np0005590528 kernel: cryptd: max_cpu_qlen set to 1000
Jan 21 08:04:40 np0005590528 kernel: ACPI: Added _OSI(Module Device)
Jan 21 08:04:40 np0005590528 kernel: ACPI: Added _OSI(Processor Device)
Jan 21 08:04:40 np0005590528 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 21 08:04:40 np0005590528 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 21 08:04:40 np0005590528 kernel: ACPI: Interpreter enabled
Jan 21 08:04:40 np0005590528 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 21 08:04:40 np0005590528 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 21 08:04:40 np0005590528 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 21 08:04:40 np0005590528 kernel: PCI: Using E820 reservations for host bridge windows
Jan 21 08:04:40 np0005590528 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 21 08:04:40 np0005590528 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 21 08:04:40 np0005590528 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [3] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [4] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [5] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [6] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [7] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [8] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [9] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [10] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [11] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [12] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [13] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [14] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [15] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [16] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [17] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [18] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [19] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [20] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [21] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [22] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [23] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [24] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [25] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [26] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [27] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [28] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [29] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [30] registered
Jan 21 08:04:40 np0005590528 kernel: acpiphp: Slot [31] registered
Jan 21 08:04:40 np0005590528 kernel: PCI host bridge to bus 0000:00
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 21 08:04:40 np0005590528 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 21 08:04:40 np0005590528 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 21 08:04:40 np0005590528 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 21 08:04:40 np0005590528 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 21 08:04:40 np0005590528 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 21 08:04:40 np0005590528 kernel: iommu: Default domain type: Translated
Jan 21 08:04:40 np0005590528 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 21 08:04:40 np0005590528 kernel: SCSI subsystem initialized
Jan 21 08:04:40 np0005590528 kernel: ACPI: bus type USB registered
Jan 21 08:04:40 np0005590528 kernel: usbcore: registered new interface driver usbfs
Jan 21 08:04:40 np0005590528 kernel: usbcore: registered new interface driver hub
Jan 21 08:04:40 np0005590528 kernel: usbcore: registered new device driver usb
Jan 21 08:04:40 np0005590528 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 21 08:04:40 np0005590528 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 21 08:04:40 np0005590528 kernel: PTP clock support registered
Jan 21 08:04:40 np0005590528 kernel: EDAC MC: Ver: 3.0.0
Jan 21 08:04:40 np0005590528 kernel: NetLabel: Initializing
Jan 21 08:04:40 np0005590528 kernel: NetLabel:  domain hash size = 128
Jan 21 08:04:40 np0005590528 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 21 08:04:40 np0005590528 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 21 08:04:40 np0005590528 kernel: PCI: Using ACPI for IRQ routing
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 21 08:04:40 np0005590528 kernel: vgaarb: loaded
Jan 21 08:04:40 np0005590528 kernel: clocksource: Switched to clocksource kvm-clock
Jan 21 08:04:40 np0005590528 kernel: VFS: Disk quotas dquot_6.6.0
Jan 21 08:04:40 np0005590528 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 21 08:04:40 np0005590528 kernel: pnp: PnP ACPI init
Jan 21 08:04:40 np0005590528 kernel: pnp: PnP ACPI: found 5 devices
Jan 21 08:04:40 np0005590528 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 21 08:04:40 np0005590528 kernel: NET: Registered PF_INET protocol family
Jan 21 08:04:40 np0005590528 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 21 08:04:40 np0005590528 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 21 08:04:40 np0005590528 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 21 08:04:40 np0005590528 kernel: NET: Registered PF_XDP protocol family
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 21 08:04:40 np0005590528 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 21 08:04:40 np0005590528 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 21 08:04:40 np0005590528 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 117970 usecs
Jan 21 08:04:40 np0005590528 kernel: PCI: CLS 0 bytes, default 64
Jan 21 08:04:40 np0005590528 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 21 08:04:40 np0005590528 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 21 08:04:40 np0005590528 kernel: Trying to unpack rootfs image as initramfs...
Jan 21 08:04:40 np0005590528 kernel: ACPI: bus type thunderbolt registered
Jan 21 08:04:40 np0005590528 kernel: Initialise system trusted keyrings
Jan 21 08:04:40 np0005590528 kernel: Key type blacklist registered
Jan 21 08:04:40 np0005590528 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 21 08:04:40 np0005590528 kernel: zbud: loaded
Jan 21 08:04:40 np0005590528 kernel: integrity: Platform Keyring initialized
Jan 21 08:04:40 np0005590528 kernel: integrity: Machine keyring initialized
Jan 21 08:04:40 np0005590528 kernel: Freeing initrd memory: 87956K
Jan 21 08:04:40 np0005590528 kernel: NET: Registered PF_ALG protocol family
Jan 21 08:04:40 np0005590528 kernel: xor: automatically using best checksumming function   avx       
Jan 21 08:04:40 np0005590528 kernel: Key type asymmetric registered
Jan 21 08:04:40 np0005590528 kernel: Asymmetric key parser 'x509' registered
Jan 21 08:04:40 np0005590528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 21 08:04:40 np0005590528 kernel: io scheduler mq-deadline registered
Jan 21 08:04:40 np0005590528 kernel: io scheduler kyber registered
Jan 21 08:04:40 np0005590528 kernel: io scheduler bfq registered
Jan 21 08:04:40 np0005590528 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 21 08:04:40 np0005590528 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 21 08:04:40 np0005590528 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 21 08:04:40 np0005590528 kernel: ACPI: button: Power Button [PWRF]
Jan 21 08:04:40 np0005590528 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 21 08:04:40 np0005590528 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 21 08:04:40 np0005590528 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 21 08:04:40 np0005590528 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 21 08:04:40 np0005590528 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 21 08:04:40 np0005590528 kernel: Non-volatile memory driver v1.3
Jan 21 08:04:40 np0005590528 kernel: rdac: device handler registered
Jan 21 08:04:40 np0005590528 kernel: hp_sw: device handler registered
Jan 21 08:04:40 np0005590528 kernel: emc: device handler registered
Jan 21 08:04:40 np0005590528 kernel: alua: device handler registered
Jan 21 08:04:40 np0005590528 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 21 08:04:40 np0005590528 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 21 08:04:40 np0005590528 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 21 08:04:40 np0005590528 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 21 08:04:40 np0005590528 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 21 08:04:40 np0005590528 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 21 08:04:40 np0005590528 kernel: usb usb1: Product: UHCI Host Controller
Jan 21 08:04:40 np0005590528 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 21 08:04:40 np0005590528 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 21 08:04:40 np0005590528 kernel: hub 1-0:1.0: USB hub found
Jan 21 08:04:40 np0005590528 kernel: hub 1-0:1.0: 2 ports detected
Jan 21 08:04:40 np0005590528 kernel: usbcore: registered new interface driver usbserial_generic
Jan 21 08:04:40 np0005590528 kernel: usbserial: USB Serial support registered for generic
Jan 21 08:04:40 np0005590528 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 21 08:04:40 np0005590528 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 21 08:04:40 np0005590528 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 21 08:04:40 np0005590528 kernel: mousedev: PS/2 mouse device common for all mice
Jan 21 08:04:40 np0005590528 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 21 08:04:40 np0005590528 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 21 08:04:40 np0005590528 kernel: rtc_cmos 00:04: registered as rtc0
Jan 21 08:04:40 np0005590528 kernel: rtc_cmos 00:04: setting system clock to 2026-01-21T13:04:39 UTC (1769000679)
Jan 21 08:04:40 np0005590528 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 21 08:04:40 np0005590528 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 21 08:04:40 np0005590528 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 21 08:04:40 np0005590528 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 21 08:04:40 np0005590528 kernel: usbcore: registered new interface driver usbhid
Jan 21 08:04:40 np0005590528 kernel: usbhid: USB HID core driver
Jan 21 08:04:40 np0005590528 kernel: drop_monitor: Initializing network drop monitor service
Jan 21 08:04:40 np0005590528 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 21 08:04:40 np0005590528 kernel: Initializing XFRM netlink socket
Jan 21 08:04:40 np0005590528 kernel: NET: Registered PF_INET6 protocol family
Jan 21 08:04:40 np0005590528 kernel: Segment Routing with IPv6
Jan 21 08:04:40 np0005590528 kernel: NET: Registered PF_PACKET protocol family
Jan 21 08:04:40 np0005590528 kernel: mpls_gso: MPLS GSO support
Jan 21 08:04:40 np0005590528 kernel: IPI shorthand broadcast: enabled
Jan 21 08:04:40 np0005590528 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 21 08:04:40 np0005590528 kernel: AES CTR mode by8 optimization enabled
Jan 21 08:04:40 np0005590528 kernel: sched_clock: Marking stable (1648007109, 151456849)->(1952343655, -152879697)
Jan 21 08:04:40 np0005590528 kernel: registered taskstats version 1
Jan 21 08:04:40 np0005590528 kernel: Loading compiled-in X.509 certificates
Jan 21 08:04:40 np0005590528 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 21 08:04:40 np0005590528 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 21 08:04:40 np0005590528 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 21 08:04:40 np0005590528 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 21 08:04:40 np0005590528 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 21 08:04:40 np0005590528 kernel: Demotion targets for Node 0: null
Jan 21 08:04:40 np0005590528 kernel: page_owner is disabled
Jan 21 08:04:40 np0005590528 kernel: Key type .fscrypt registered
Jan 21 08:04:40 np0005590528 kernel: Key type fscrypt-provisioning registered
Jan 21 08:04:40 np0005590528 kernel: Key type big_key registered
Jan 21 08:04:40 np0005590528 kernel: Key type encrypted registered
Jan 21 08:04:40 np0005590528 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 21 08:04:40 np0005590528 kernel: Loading compiled-in module X.509 certificates
Jan 21 08:04:40 np0005590528 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 21 08:04:40 np0005590528 kernel: ima: Allocated hash algorithm: sha256
Jan 21 08:04:40 np0005590528 kernel: ima: No architecture policies found
Jan 21 08:04:40 np0005590528 kernel: evm: Initialising EVM extended attributes:
Jan 21 08:04:40 np0005590528 kernel: evm: security.selinux
Jan 21 08:04:40 np0005590528 kernel: evm: security.SMACK64 (disabled)
Jan 21 08:04:40 np0005590528 kernel: evm: security.SMACK64EXEC (disabled)
Jan 21 08:04:40 np0005590528 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 21 08:04:40 np0005590528 kernel: evm: security.SMACK64MMAP (disabled)
Jan 21 08:04:40 np0005590528 kernel: evm: security.apparmor (disabled)
Jan 21 08:04:40 np0005590528 kernel: evm: security.ima
Jan 21 08:04:40 np0005590528 kernel: evm: security.capability
Jan 21 08:04:40 np0005590528 kernel: evm: HMAC attrs: 0x1
Jan 21 08:04:40 np0005590528 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 21 08:04:40 np0005590528 kernel: Running certificate verification RSA selftest
Jan 21 08:04:40 np0005590528 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 21 08:04:40 np0005590528 kernel: Running certificate verification ECDSA selftest
Jan 21 08:04:40 np0005590528 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 21 08:04:40 np0005590528 kernel: clk: Disabling unused clocks
Jan 21 08:04:40 np0005590528 kernel: Freeing unused decrypted memory: 2028K
Jan 21 08:04:40 np0005590528 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 21 08:04:40 np0005590528 kernel: Write protecting the kernel read-only data: 30720k
Jan 21 08:04:40 np0005590528 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 21 08:04:40 np0005590528 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 21 08:04:40 np0005590528 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 21 08:04:40 np0005590528 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 21 08:04:40 np0005590528 kernel: usb 1-1: Manufacturer: QEMU
Jan 21 08:04:40 np0005590528 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 21 08:04:40 np0005590528 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 21 08:04:40 np0005590528 kernel: Run /init as init process
Jan 21 08:04:40 np0005590528 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 21 08:04:40 np0005590528 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 21 08:04:40 np0005590528 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 21 08:04:40 np0005590528 systemd: Detected virtualization kvm.
Jan 21 08:04:40 np0005590528 systemd: Detected architecture x86-64.
Jan 21 08:04:40 np0005590528 systemd: Running in initrd.
Jan 21 08:04:40 np0005590528 systemd: No hostname configured, using default hostname.
Jan 21 08:04:40 np0005590528 systemd: Hostname set to <localhost>.
Jan 21 08:04:40 np0005590528 systemd: Initializing machine ID from VM UUID.
Jan 21 08:04:40 np0005590528 systemd: Queued start job for default target Initrd Default Target.
Jan 21 08:04:40 np0005590528 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 21 08:04:40 np0005590528 systemd: Reached target Local Encrypted Volumes.
Jan 21 08:04:40 np0005590528 systemd: Reached target Initrd /usr File System.
Jan 21 08:04:40 np0005590528 systemd: Reached target Local File Systems.
Jan 21 08:04:40 np0005590528 systemd: Reached target Path Units.
Jan 21 08:04:40 np0005590528 systemd: Reached target Slice Units.
Jan 21 08:04:40 np0005590528 systemd: Reached target Swaps.
Jan 21 08:04:40 np0005590528 systemd: Reached target Timer Units.
Jan 21 08:04:40 np0005590528 systemd: Listening on D-Bus System Message Bus Socket.
Jan 21 08:04:40 np0005590528 systemd: Listening on Journal Socket (/dev/log).
Jan 21 08:04:40 np0005590528 systemd: Listening on Journal Socket.
Jan 21 08:04:40 np0005590528 systemd: Listening on udev Control Socket.
Jan 21 08:04:40 np0005590528 systemd: Listening on udev Kernel Socket.
Jan 21 08:04:40 np0005590528 systemd: Reached target Socket Units.
Jan 21 08:04:40 np0005590528 systemd: Starting Create List of Static Device Nodes...
Jan 21 08:04:40 np0005590528 systemd: Starting Journal Service...
Jan 21 08:04:40 np0005590528 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 21 08:04:40 np0005590528 systemd: Starting Apply Kernel Variables...
Jan 21 08:04:40 np0005590528 systemd: Starting Create System Users...
Jan 21 08:04:40 np0005590528 systemd: Starting Setup Virtual Console...
Jan 21 08:04:40 np0005590528 systemd: Finished Create List of Static Device Nodes.
Jan 21 08:04:40 np0005590528 systemd: Finished Apply Kernel Variables.
Jan 21 08:04:40 np0005590528 systemd: Finished Create System Users.
Jan 21 08:04:40 np0005590528 systemd-journald[309]: Journal started
Jan 21 08:04:40 np0005590528 systemd-journald[309]: Runtime Journal (/run/log/journal/7823760d016641228fb23165351e57e7) is 8.0M, max 153.6M, 145.6M free.
Jan 21 08:04:40 np0005590528 systemd-sysusers[313]: Creating group 'users' with GID 100.
Jan 21 08:04:40 np0005590528 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Jan 21 08:04:40 np0005590528 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 21 08:04:40 np0005590528 systemd: Started Journal Service.
Jan 21 08:04:40 np0005590528 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 21 08:04:40 np0005590528 systemd[1]: Starting Create Volatile Files and Directories...
Jan 21 08:04:40 np0005590528 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 21 08:04:40 np0005590528 systemd[1]: Finished Create Volatile Files and Directories.
Jan 21 08:04:40 np0005590528 systemd[1]: Finished Setup Virtual Console.
Jan 21 08:04:40 np0005590528 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 21 08:04:40 np0005590528 systemd[1]: Starting dracut cmdline hook...
Jan 21 08:04:40 np0005590528 dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Jan 21 08:04:40 np0005590528 dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 08:04:40 np0005590528 systemd[1]: Finished dracut cmdline hook.
Jan 21 08:04:40 np0005590528 systemd[1]: Starting dracut pre-udev hook...
Jan 21 08:04:40 np0005590528 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 21 08:04:40 np0005590528 kernel: device-mapper: uevent: version 1.0.3
Jan 21 08:04:40 np0005590528 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 21 08:04:40 np0005590528 kernel: RPC: Registered named UNIX socket transport module.
Jan 21 08:04:40 np0005590528 kernel: RPC: Registered udp transport module.
Jan 21 08:04:40 np0005590528 kernel: RPC: Registered tcp transport module.
Jan 21 08:04:40 np0005590528 kernel: RPC: Registered tcp-with-tls transport module.
Jan 21 08:04:40 np0005590528 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 21 08:04:40 np0005590528 rpc.statd[445]: Version 2.5.4 starting
Jan 21 08:04:40 np0005590528 rpc.statd[445]: Initializing NSM state
Jan 21 08:04:40 np0005590528 rpc.idmapd[450]: Setting log level to 0
Jan 21 08:04:40 np0005590528 systemd[1]: Finished dracut pre-udev hook.
Jan 21 08:04:40 np0005590528 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 21 08:04:40 np0005590528 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Jan 21 08:04:40 np0005590528 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 21 08:04:40 np0005590528 systemd[1]: Starting dracut pre-trigger hook...
Jan 21 08:04:40 np0005590528 systemd[1]: Finished dracut pre-trigger hook.
Jan 21 08:04:40 np0005590528 systemd[1]: Starting Coldplug All udev Devices...
Jan 21 08:04:40 np0005590528 systemd[1]: Created slice Slice /system/modprobe.
Jan 21 08:04:41 np0005590528 systemd[1]: Starting Load Kernel Module configfs...
Jan 21 08:04:41 np0005590528 systemd[1]: Finished Coldplug All udev Devices.
Jan 21 08:04:41 np0005590528 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 08:04:41 np0005590528 systemd[1]: Finished Load Kernel Module configfs.
Jan 21 08:04:41 np0005590528 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 21 08:04:41 np0005590528 systemd[1]: Reached target Network.
Jan 21 08:04:41 np0005590528 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 21 08:04:41 np0005590528 systemd[1]: Starting dracut initqueue hook...
Jan 21 08:04:41 np0005590528 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 21 08:04:41 np0005590528 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 21 08:04:41 np0005590528 kernel: vda: vda1
Jan 21 08:04:41 np0005590528 kernel: scsi host0: ata_piix
Jan 21 08:04:41 np0005590528 kernel: scsi host1: ata_piix
Jan 21 08:04:41 np0005590528 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 21 08:04:41 np0005590528 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 21 08:04:41 np0005590528 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 21 08:04:41 np0005590528 systemd[1]: Reached target Initrd Root Device.
Jan 21 08:04:41 np0005590528 systemd[1]: Mounting Kernel Configuration File System...
Jan 21 08:04:41 np0005590528 systemd[1]: Mounted Kernel Configuration File System.
Jan 21 08:04:41 np0005590528 systemd[1]: Reached target System Initialization.
Jan 21 08:04:41 np0005590528 systemd[1]: Reached target Basic System.
Jan 21 08:04:41 np0005590528 kernel: ata1: found unknown device (class 0)
Jan 21 08:04:41 np0005590528 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 21 08:04:41 np0005590528 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 21 08:04:41 np0005590528 systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 08:04:41 np0005590528 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 21 08:04:41 np0005590528 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 21 08:04:41 np0005590528 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 21 08:04:41 np0005590528 systemd[1]: Finished dracut initqueue hook.
Jan 21 08:04:41 np0005590528 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 21 08:04:41 np0005590528 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 21 08:04:41 np0005590528 systemd[1]: Reached target Remote File Systems.
Jan 21 08:04:41 np0005590528 systemd[1]: Starting dracut pre-mount hook...
Jan 21 08:04:41 np0005590528 systemd[1]: Finished dracut pre-mount hook.
Jan 21 08:04:41 np0005590528 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 21 08:04:41 np0005590528 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Jan 21 08:04:41 np0005590528 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 21 08:04:41 np0005590528 systemd[1]: Mounting /sysroot...
Jan 21 08:04:41 np0005590528 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 21 08:04:41 np0005590528 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 21 08:04:42 np0005590528 kernel: XFS (vda1): Ending clean mount
Jan 21 08:04:42 np0005590528 systemd[1]: Mounted /sysroot.
Jan 21 08:04:42 np0005590528 systemd[1]: Reached target Initrd Root File System.
Jan 21 08:04:42 np0005590528 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 21 08:04:42 np0005590528 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 21 08:04:42 np0005590528 systemd[1]: Reached target Initrd File Systems.
Jan 21 08:04:42 np0005590528 systemd[1]: Reached target Initrd Default Target.
Jan 21 08:04:42 np0005590528 systemd[1]: Starting dracut mount hook...
Jan 21 08:04:42 np0005590528 systemd[1]: Finished dracut mount hook.
Jan 21 08:04:42 np0005590528 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 21 08:04:42 np0005590528 rpc.idmapd[450]: exiting on signal 15
Jan 21 08:04:42 np0005590528 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 21 08:04:42 np0005590528 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Network.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Timer Units.
Jan 21 08:04:42 np0005590528 systemd[1]: dbus.socket: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 21 08:04:42 np0005590528 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Initrd Default Target.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Basic System.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Initrd Root Device.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Initrd /usr File System.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Path Units.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Remote File Systems.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Slice Units.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Socket Units.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target System Initialization.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Local File Systems.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Swaps.
Jan 21 08:04:42 np0005590528 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped dracut mount hook.
Jan 21 08:04:42 np0005590528 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped dracut pre-mount hook.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 21 08:04:42 np0005590528 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped dracut initqueue hook.
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Apply Kernel Variables.
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Coldplug All udev Devices.
Jan 21 08:04:42 np0005590528 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped dracut pre-trigger hook.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Setup Virtual Console.
Jan 21 08:04:42 np0005590528 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Closed udev Control Socket.
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Closed udev Kernel Socket.
Jan 21 08:04:42 np0005590528 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped dracut pre-udev hook.
Jan 21 08:04:42 np0005590528 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped dracut cmdline hook.
Jan 21 08:04:42 np0005590528 systemd[1]: Starting Cleanup udev Database...
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 21 08:04:42 np0005590528 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 21 08:04:42 np0005590528 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Stopped Create System Users.
Jan 21 08:04:42 np0005590528 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 21 08:04:42 np0005590528 systemd[1]: Finished Cleanup udev Database.
Jan 21 08:04:42 np0005590528 systemd[1]: Reached target Switch Root.
Jan 21 08:04:42 np0005590528 systemd[1]: Starting Switch Root...
Jan 21 08:04:42 np0005590528 systemd[1]: Switching root.
Jan 21 08:04:42 np0005590528 systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Jan 21 08:04:42 np0005590528 systemd-journald[309]: Journal stopped
Jan 21 08:04:43 np0005590528 kernel: audit: type=1404 audit(1769000682.793:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 21 08:04:43 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:04:43 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:04:43 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:04:43 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:04:43 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:04:43 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:04:43 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:04:43 np0005590528 kernel: audit: type=1403 audit(1769000682.915:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 21 08:04:43 np0005590528 systemd: Successfully loaded SELinux policy in 124.794ms.
Jan 21 08:04:43 np0005590528 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.935ms.
Jan 21 08:04:43 np0005590528 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 21 08:04:43 np0005590528 systemd: Detected virtualization kvm.
Jan 21 08:04:43 np0005590528 systemd: Detected architecture x86-64.
Jan 21 08:04:43 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:04:43 np0005590528 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 21 08:04:43 np0005590528 systemd: Stopped Switch Root.
Jan 21 08:04:43 np0005590528 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 21 08:04:43 np0005590528 systemd: Created slice Slice /system/getty.
Jan 21 08:04:43 np0005590528 systemd: Created slice Slice /system/serial-getty.
Jan 21 08:04:43 np0005590528 systemd: Created slice Slice /system/sshd-keygen.
Jan 21 08:04:43 np0005590528 systemd: Created slice User and Session Slice.
Jan 21 08:04:43 np0005590528 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 21 08:04:43 np0005590528 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 21 08:04:43 np0005590528 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 21 08:04:43 np0005590528 systemd: Reached target Local Encrypted Volumes.
Jan 21 08:04:43 np0005590528 systemd: Stopped target Switch Root.
Jan 21 08:04:43 np0005590528 systemd: Stopped target Initrd File Systems.
Jan 21 08:04:43 np0005590528 systemd: Stopped target Initrd Root File System.
Jan 21 08:04:43 np0005590528 systemd: Reached target Local Integrity Protected Volumes.
Jan 21 08:04:43 np0005590528 systemd: Reached target Path Units.
Jan 21 08:04:43 np0005590528 systemd: Reached target rpc_pipefs.target.
Jan 21 08:04:43 np0005590528 systemd: Reached target Slice Units.
Jan 21 08:04:43 np0005590528 systemd: Reached target Swaps.
Jan 21 08:04:43 np0005590528 systemd: Reached target Local Verity Protected Volumes.
Jan 21 08:04:43 np0005590528 systemd: Listening on RPCbind Server Activation Socket.
Jan 21 08:04:43 np0005590528 systemd: Reached target RPC Port Mapper.
Jan 21 08:04:43 np0005590528 systemd: Listening on Process Core Dump Socket.
Jan 21 08:04:43 np0005590528 systemd: Listening on initctl Compatibility Named Pipe.
Jan 21 08:04:43 np0005590528 systemd: Listening on udev Control Socket.
Jan 21 08:04:43 np0005590528 systemd: Listening on udev Kernel Socket.
Jan 21 08:04:43 np0005590528 systemd: Mounting Huge Pages File System...
Jan 21 08:04:43 np0005590528 systemd: Mounting POSIX Message Queue File System...
Jan 21 08:04:43 np0005590528 systemd: Mounting Kernel Debug File System...
Jan 21 08:04:43 np0005590528 systemd: Mounting Kernel Trace File System...
Jan 21 08:04:43 np0005590528 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 21 08:04:43 np0005590528 systemd: Starting Create List of Static Device Nodes...
Jan 21 08:04:43 np0005590528 systemd: Starting Load Kernel Module configfs...
Jan 21 08:04:43 np0005590528 systemd: Starting Load Kernel Module drm...
Jan 21 08:04:43 np0005590528 systemd: Starting Load Kernel Module efi_pstore...
Jan 21 08:04:43 np0005590528 systemd: Starting Load Kernel Module fuse...
Jan 21 08:04:43 np0005590528 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 21 08:04:43 np0005590528 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 21 08:04:43 np0005590528 systemd: Stopped File System Check on Root Device.
Jan 21 08:04:43 np0005590528 systemd: Stopped Journal Service.
Jan 21 08:04:43 np0005590528 systemd: Starting Journal Service...
Jan 21 08:04:43 np0005590528 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 21 08:04:43 np0005590528 systemd: Starting Generate network units from Kernel command line...
Jan 21 08:04:43 np0005590528 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 08:04:43 np0005590528 systemd: Starting Remount Root and Kernel File Systems...
Jan 21 08:04:43 np0005590528 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 21 08:04:43 np0005590528 systemd: Starting Apply Kernel Variables...
Jan 21 08:04:43 np0005590528 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 21 08:04:43 np0005590528 kernel: fuse: init (API version 7.37)
Jan 21 08:04:43 np0005590528 systemd: Starting Coldplug All udev Devices...
Jan 21 08:04:43 np0005590528 systemd-journald[675]: Journal started
Jan 21 08:04:43 np0005590528 systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 21 08:04:43 np0005590528 systemd[1]: Queued start job for default target Multi-User System.
Jan 21 08:04:43 np0005590528 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 21 08:04:43 np0005590528 systemd: Started Journal Service.
Jan 21 08:04:43 np0005590528 systemd[1]: Mounted Huge Pages File System.
Jan 21 08:04:43 np0005590528 systemd[1]: Mounted POSIX Message Queue File System.
Jan 21 08:04:43 np0005590528 systemd[1]: Mounted Kernel Debug File System.
Jan 21 08:04:43 np0005590528 systemd[1]: Mounted Kernel Trace File System.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Create List of Static Device Nodes.
Jan 21 08:04:43 np0005590528 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Load Kernel Module configfs.
Jan 21 08:04:43 np0005590528 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 21 08:04:43 np0005590528 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Load Kernel Module fuse.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Generate network units from Kernel command line.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Apply Kernel Variables.
Jan 21 08:04:43 np0005590528 kernel: ACPI: bus type drm_connector registered
Jan 21 08:04:43 np0005590528 systemd[1]: Mounting FUSE Control File System...
Jan 21 08:04:43 np0005590528 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Rebuild Hardware Database...
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 21 08:04:43 np0005590528 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Load/Save OS Random Seed...
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Create System Users...
Jan 21 08:04:43 np0005590528 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Load Kernel Module drm.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Coldplug All udev Devices.
Jan 21 08:04:43 np0005590528 systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 21 08:04:43 np0005590528 systemd-journald[675]: Received client request to flush runtime journal.
Jan 21 08:04:43 np0005590528 systemd[1]: Mounted FUSE Control File System.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Load/Save OS Random Seed.
Jan 21 08:04:43 np0005590528 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Create System Users.
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 21 08:04:43 np0005590528 systemd[1]: Reached target Preparation for Local File Systems.
Jan 21 08:04:43 np0005590528 systemd[1]: Reached target Local File Systems.
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 21 08:04:43 np0005590528 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 21 08:04:43 np0005590528 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 21 08:04:43 np0005590528 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Automatic Boot Loader Update...
Jan 21 08:04:43 np0005590528 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Create Volatile Files and Directories...
Jan 21 08:04:43 np0005590528 bootctl[692]: Couldn't find EFI system partition, skipping.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Automatic Boot Loader Update.
Jan 21 08:04:43 np0005590528 systemd[1]: Finished Create Volatile Files and Directories.
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Security Auditing Service...
Jan 21 08:04:43 np0005590528 systemd[1]: Starting RPC Bind...
Jan 21 08:04:43 np0005590528 systemd[1]: Starting Rebuild Journal Catalog...
Jan 21 08:04:43 np0005590528 auditd[698]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 21 08:04:43 np0005590528 auditd[698]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 21 08:04:43 np0005590528 systemd[1]: Started RPC Bind.
Jan 21 08:04:44 np0005590528 systemd[1]: Finished Rebuild Journal Catalog.
Jan 21 08:04:44 np0005590528 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 21 08:04:44 np0005590528 augenrules[703]: /sbin/augenrules: No change
Jan 21 08:04:44 np0005590528 augenrules[718]: No rules
Jan 21 08:04:44 np0005590528 augenrules[718]: enabled 1
Jan 21 08:04:44 np0005590528 augenrules[718]: failure 1
Jan 21 08:04:44 np0005590528 augenrules[718]: pid 698
Jan 21 08:04:44 np0005590528 augenrules[718]: rate_limit 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_limit 8192
Jan 21 08:04:44 np0005590528 augenrules[718]: lost 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_wait_time 60000
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_wait_time_actual 0
Jan 21 08:04:44 np0005590528 augenrules[718]: enabled 1
Jan 21 08:04:44 np0005590528 augenrules[718]: failure 1
Jan 21 08:04:44 np0005590528 augenrules[718]: pid 698
Jan 21 08:04:44 np0005590528 augenrules[718]: rate_limit 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_limit 8192
Jan 21 08:04:44 np0005590528 augenrules[718]: lost 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_wait_time 60000
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_wait_time_actual 0
Jan 21 08:04:44 np0005590528 augenrules[718]: enabled 1
Jan 21 08:04:44 np0005590528 augenrules[718]: failure 1
Jan 21 08:04:44 np0005590528 augenrules[718]: pid 698
Jan 21 08:04:44 np0005590528 augenrules[718]: rate_limit 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_limit 8192
Jan 21 08:04:44 np0005590528 augenrules[718]: lost 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog 0
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_wait_time 60000
Jan 21 08:04:44 np0005590528 augenrules[718]: backlog_wait_time_actual 0
Jan 21 08:04:44 np0005590528 systemd[1]: Started Security Auditing Service.
Jan 21 08:04:44 np0005590528 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 21 08:04:44 np0005590528 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 21 08:04:44 np0005590528 systemd[1]: Finished Rebuild Hardware Database.
Jan 21 08:04:44 np0005590528 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 21 08:04:44 np0005590528 systemd[1]: Starting Update is Completed...
Jan 21 08:04:44 np0005590528 systemd[1]: Finished Update is Completed.
Jan 21 08:04:44 np0005590528 systemd-udevd[726]: Using default interface naming scheme 'rhel-9.0'.
Jan 21 08:04:44 np0005590528 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 21 08:04:44 np0005590528 systemd[1]: Reached target System Initialization.
Jan 21 08:04:44 np0005590528 systemd[1]: Started dnf makecache --timer.
Jan 21 08:04:44 np0005590528 systemd[1]: Started Daily rotation of log files.
Jan 21 08:04:44 np0005590528 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 21 08:04:44 np0005590528 systemd[1]: Reached target Timer Units.
Jan 21 08:04:44 np0005590528 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 21 08:04:44 np0005590528 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 21 08:04:44 np0005590528 systemd[1]: Reached target Socket Units.
Jan 21 08:04:44 np0005590528 systemd[1]: Starting D-Bus System Message Bus...
Jan 21 08:04:44 np0005590528 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 08:04:44 np0005590528 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 21 08:04:44 np0005590528 systemd[1]: Starting Load Kernel Module configfs...
Jan 21 08:04:44 np0005590528 systemd-udevd[730]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 08:04:44 np0005590528 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 08:04:44 np0005590528 systemd[1]: Finished Load Kernel Module configfs.
Jan 21 08:04:44 np0005590528 systemd[1]: Started D-Bus System Message Bus.
Jan 21 08:04:44 np0005590528 systemd[1]: Reached target Basic System.
Jan 21 08:04:44 np0005590528 dbus-broker-lau[748]: Ready
Jan 21 08:04:44 np0005590528 systemd[1]: Starting NTP client/server...
Jan 21 08:04:44 np0005590528 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 21 08:04:44 np0005590528 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 21 08:04:44 np0005590528 systemd[1]: Starting IPv4 firewall with iptables...
Jan 21 08:04:44 np0005590528 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 21 08:04:44 np0005590528 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 21 08:04:44 np0005590528 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 21 08:04:44 np0005590528 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 21 08:04:44 np0005590528 systemd[1]: Started irqbalance daemon.
Jan 21 08:04:44 np0005590528 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 21 08:04:44 np0005590528 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 08:04:44 np0005590528 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 08:04:44 np0005590528 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 08:04:44 np0005590528 systemd[1]: Reached target sshd-keygen.target.
Jan 21 08:04:44 np0005590528 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 21 08:04:44 np0005590528 systemd[1]: Reached target User and Group Name Lookups.
Jan 21 08:04:44 np0005590528 systemd[1]: Starting User Login Management...
Jan 21 08:04:44 np0005590528 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 21 08:04:44 np0005590528 chronyd[787]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 21 08:04:44 np0005590528 chronyd[787]: Loaded 0 symmetric keys
Jan 21 08:04:44 np0005590528 chronyd[787]: Using right/UTC timezone to obtain leap second data
Jan 21 08:04:44 np0005590528 chronyd[787]: Loaded seccomp filter (level 2)
Jan 21 08:04:44 np0005590528 systemd[1]: Started NTP client/server.
Jan 21 08:04:45 np0005590528 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 21 08:04:45 np0005590528 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 21 08:04:45 np0005590528 kernel: Console: switching to colour dummy device 80x25
Jan 21 08:04:45 np0005590528 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 21 08:04:45 np0005590528 kernel: [drm] features: -context_init
Jan 21 08:04:45 np0005590528 kernel: [drm] number of scanouts: 1
Jan 21 08:04:45 np0005590528 kernel: [drm] number of cap sets: 0
Jan 21 08:04:45 np0005590528 systemd-logind[780]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 21 08:04:45 np0005590528 systemd-logind[780]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 21 08:04:45 np0005590528 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 21 08:04:45 np0005590528 systemd-logind[780]: New seat seat0.
Jan 21 08:04:45 np0005590528 systemd[1]: Started User Login Management.
Jan 21 08:04:45 np0005590528 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 21 08:04:45 np0005590528 kernel: Console: switching to colour frame buffer device 128x48
Jan 21 08:04:45 np0005590528 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 21 08:04:45 np0005590528 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 21 08:04:45 np0005590528 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 21 08:04:45 np0005590528 kernel: kvm_amd: TSC scaling supported
Jan 21 08:04:45 np0005590528 kernel: kvm_amd: Nested Virtualization enabled
Jan 21 08:04:45 np0005590528 kernel: kvm_amd: Nested Paging enabled
Jan 21 08:04:45 np0005590528 kernel: kvm_amd: LBR virtualization supported
Jan 21 08:04:45 np0005590528 iptables.init[774]: iptables: Applying firewall rules: [  OK  ]
Jan 21 08:04:45 np0005590528 systemd[1]: Finished IPv4 firewall with iptables.
Jan 21 08:04:45 np0005590528 cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Wed, 21 Jan 2026 13:04:45 +0000. Up 7.47 seconds.
Jan 21 08:04:45 np0005590528 systemd[1]: run-cloud\x2dinit-tmp-tmpftitivn9.mount: Deactivated successfully.
Jan 21 08:04:45 np0005590528 systemd[1]: Starting Hostname Service...
Jan 21 08:04:45 np0005590528 systemd[1]: Started Hostname Service.
Jan 21 08:04:45 np0005590528 systemd-hostnamed[850]: Hostname set to <np0005590528.novalocal> (static)
Jan 21 08:04:45 np0005590528 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 21 08:04:45 np0005590528 systemd[1]: Reached target Preparation for Network.
Jan 21 08:04:45 np0005590528 systemd[1]: Starting Network Manager...
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9150] NetworkManager (version 1.54.3-2.el9) is starting... (boot:3db60b82-452d-4090-8c5d-4863fb6f0cf4)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9154] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9221] manager[0x55dd2cf37000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9261] hostname: hostname: using hostnamed
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9261] hostname: static hostname changed from (none) to "np0005590528.novalocal"
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9264] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9380] manager[0x55dd2cf37000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9381] manager[0x55dd2cf37000]: rfkill: WWAN hardware radio set enabled
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9413] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9413] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9413] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9414] manager: Networking is enabled by state file
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9415] settings: Loaded settings plugin: keyfile (internal)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9423] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 08:04:45 np0005590528 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9538] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9547] dhcp: init: Using DHCP client 'internal'
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9549] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9558] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9565] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9575] device (lo): Activation: starting connection 'lo' (cb2caf48-e7d3-4014-a1eb-1fea24d085c3)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9585] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9587] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9613] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9617] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9619] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9621] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9623] device (eth0): carrier: link connected
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9626] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9632] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9639] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9643] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9644] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9646] manager: NetworkManager state is now CONNECTING
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9648] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9653] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9657] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:04:45 np0005590528 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 08:04:45 np0005590528 systemd[1]: Started Network Manager.
Jan 21 08:04:45 np0005590528 systemd[1]: Reached target Network.
Jan 21 08:04:45 np0005590528 systemd[1]: Starting Network Manager Wait Online...
Jan 21 08:04:45 np0005590528 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 21 08:04:45 np0005590528 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9960] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9962] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 08:04:45 np0005590528 NetworkManager[854]: <info>  [1769000685.9967] device (lo): Activation: successful, device activated.
Jan 21 08:04:46 np0005590528 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 21 08:04:46 np0005590528 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 21 08:04:46 np0005590528 systemd[1]: Reached target NFS client services.
Jan 21 08:04:46 np0005590528 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 21 08:04:46 np0005590528 systemd[1]: Reached target Remote File Systems.
Jan 21 08:04:46 np0005590528 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5019] dhcp4 (eth0): state changed new lease, address=38.102.83.175
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5036] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5071] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5119] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5122] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5129] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5133] device (eth0): Activation: successful, device activated.
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5141] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 08:04:47 np0005590528 NetworkManager[854]: <info>  [1769000687.5147] manager: startup complete
Jan 21 08:04:47 np0005590528 systemd[1]: Finished Network Manager Wait Online.
Jan 21 08:04:47 np0005590528 systemd[1]: Starting Cloud-init: Network Stage...
Jan 21 08:04:47 np0005590528 cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Wed, 21 Jan 2026 13:04:47 +0000. Up 9.93 seconds.
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |  eth0  | True |        38.102.83.175         | 255.255.255.0 | global | fa:16:3e:d2:55:45 |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fed2:5545/64 |       .       |  link  | fa:16:3e:d2:55:45 |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Jan 21 08:04:47 np0005590528 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 08:04:52 np0005590528 cloud-init[918]: Generating public/private rsa key pair.
Jan 21 08:04:52 np0005590528 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 21 08:04:52 np0005590528 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 21 08:04:52 np0005590528 cloud-init[918]: The key fingerprint is:
Jan 21 08:04:52 np0005590528 cloud-init[918]: SHA256:koHhN7PlwhAmz0+qHIoWsot03fjZClAcpZopG2yUXTY root@np0005590528.novalocal
Jan 21 08:04:52 np0005590528 cloud-init[918]: The key's randomart image is:
Jan 21 08:04:52 np0005590528 cloud-init[918]: +---[RSA 3072]----+
Jan 21 08:04:52 np0005590528 cloud-init[918]: |  . +E..         |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |  o*+++          |
Jan 21 08:04:52 np0005590528 cloud-init[918]: | o .*+* .        |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |o   =B O         |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |.*.=. B S        |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |++=oo oo         |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |++o. + .         |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |+..   o o        |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |o      +..       |
Jan 21 08:04:52 np0005590528 cloud-init[918]: +----[SHA256]-----+
Jan 21 08:04:52 np0005590528 cloud-init[918]: Generating public/private ecdsa key pair.
Jan 21 08:04:52 np0005590528 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 21 08:04:52 np0005590528 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 21 08:04:52 np0005590528 cloud-init[918]: The key fingerprint is:
Jan 21 08:04:52 np0005590528 cloud-init[918]: SHA256:GgxrRJWlR+WEqu57awiRBI5JlGqTwKNTw+A7D3uNG54 root@np0005590528.novalocal
Jan 21 08:04:52 np0005590528 cloud-init[918]: The key's randomart image is:
Jan 21 08:04:52 np0005590528 cloud-init[918]: +---[ECDSA 256]---+
Jan 21 08:04:52 np0005590528 cloud-init[918]: |=*. ...oooo      |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |*==.  .o.o       |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |*=ooo ... .      |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |+++. +..         |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |.=..o.o S        |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |  *.+  o         |
Jan 21 08:04:52 np0005590528 cloud-init[918]: | . B o.          |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |  o * o          |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |   Eo+..         |
Jan 21 08:04:52 np0005590528 cloud-init[918]: +----[SHA256]-----+
Jan 21 08:04:52 np0005590528 cloud-init[918]: Generating public/private ed25519 key pair.
Jan 21 08:04:52 np0005590528 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 21 08:04:52 np0005590528 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 21 08:04:52 np0005590528 cloud-init[918]: The key fingerprint is:
Jan 21 08:04:52 np0005590528 cloud-init[918]: SHA256:njsIb0PH+ekAB6UR3K8yhaujWkH5GgS/BixUqpvhO/0 root@np0005590528.novalocal
Jan 21 08:04:52 np0005590528 cloud-init[918]: The key's randomart image is:
Jan 21 08:04:52 np0005590528 cloud-init[918]: +--[ED25519 256]--+
Jan 21 08:04:52 np0005590528 cloud-init[918]: |.... .oo.        |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |oo..  .+.        |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |oo=   o. .       |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |o+ o  ... .      |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |o = . .+So       |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |.= +. =+=.       |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |o.+  = =+. .     |
Jan 21 08:04:52 np0005590528 cloud-init[918]: | o..o = .oo      |
Jan 21 08:04:52 np0005590528 cloud-init[918]: |.oo..E ..o.      |
Jan 21 08:04:52 np0005590528 cloud-init[918]: +----[SHA256]-----+
Jan 21 08:04:52 np0005590528 systemd[1]: Finished Cloud-init: Network Stage.
Jan 21 08:04:52 np0005590528 systemd[1]: Reached target Cloud-config availability.
Jan 21 08:04:52 np0005590528 systemd[1]: Reached target Network is Online.
Jan 21 08:04:52 np0005590528 systemd[1]: Starting Cloud-init: Config Stage...
Jan 21 08:04:52 np0005590528 systemd[1]: Starting Crash recovery kernel arming...
Jan 21 08:04:52 np0005590528 systemd[1]: Starting Notify NFS peers of a restart...
Jan 21 08:04:52 np0005590528 systemd[1]: Starting System Logging Service...
Jan 21 08:04:52 np0005590528 systemd[1]: Starting OpenSSH server daemon...
Jan 21 08:04:52 np0005590528 sm-notify[1001]: Version 2.5.4 starting
Jan 21 08:04:52 np0005590528 systemd[1]: Starting Permit User Sessions...
Jan 21 08:04:52 np0005590528 systemd[1]: Started Notify NFS peers of a restart.
Jan 21 08:04:52 np0005590528 systemd[1]: Started OpenSSH server daemon.
Jan 21 08:04:52 np0005590528 systemd[1]: Finished Permit User Sessions.
Jan 21 08:04:52 np0005590528 systemd[1]: Started Command Scheduler.
Jan 21 08:04:52 np0005590528 systemd[1]: Started Getty on tty1.
Jan 21 08:04:52 np0005590528 systemd[1]: Started Serial Getty on ttyS0.
Jan 21 08:04:52 np0005590528 systemd[1]: Reached target Login Prompts.
Jan 21 08:04:52 np0005590528 rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Jan 21 08:04:52 np0005590528 rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 21 08:04:52 np0005590528 systemd[1]: Started System Logging Service.
Jan 21 08:04:52 np0005590528 systemd[1]: Reached target Multi-User System.
Jan 21 08:04:52 np0005590528 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 21 08:04:52 np0005590528 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 21 08:04:52 np0005590528 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 21 08:04:52 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 08:04:52 np0005590528 kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Jan 21 08:04:52 np0005590528 kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 21 08:04:52 np0005590528 chronyd[787]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 21 08:04:52 np0005590528 chronyd[787]: System clock TAI offset set to 37 seconds
Jan 21 08:04:52 np0005590528 cloud-init[1144]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Wed, 21 Jan 2026 13:04:52 +0000. Up 14.55 seconds.
Jan 21 08:04:52 np0005590528 systemd[1]: Finished Cloud-init: Config Stage.
Jan 21 08:04:52 np0005590528 systemd[1]: Starting Cloud-init: Final Stage...
Jan 21 08:04:52 np0005590528 dracut[1270]: dracut-057-102.git20250818.el9
Jan 21 08:04:52 np0005590528 cloud-init[1300]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Wed, 21 Jan 2026 13:04:52 +0000. Up 14.98 seconds.
Jan 21 08:04:52 np0005590528 dracut[1273]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 21 08:04:52 np0005590528 cloud-init[1332]: #############################################################
Jan 21 08:04:52 np0005590528 cloud-init[1335]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 21 08:04:52 np0005590528 cloud-init[1345]: 256 SHA256:GgxrRJWlR+WEqu57awiRBI5JlGqTwKNTw+A7D3uNG54 root@np0005590528.novalocal (ECDSA)
Jan 21 08:04:53 np0005590528 cloud-init[1350]: 256 SHA256:njsIb0PH+ekAB6UR3K8yhaujWkH5GgS/BixUqpvhO/0 root@np0005590528.novalocal (ED25519)
Jan 21 08:04:53 np0005590528 cloud-init[1356]: 3072 SHA256:koHhN7PlwhAmz0+qHIoWsot03fjZClAcpZopG2yUXTY root@np0005590528.novalocal (RSA)
Jan 21 08:04:53 np0005590528 cloud-init[1359]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 21 08:04:53 np0005590528 cloud-init[1361]: #############################################################
Jan 21 08:04:53 np0005590528 cloud-init[1300]: Cloud-init v. 24.4-8.el9 finished at Wed, 21 Jan 2026 13:04:53 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 15.16 seconds
Jan 21 08:04:53 np0005590528 systemd[1]: Finished Cloud-init: Final Stage.
Jan 21 08:04:53 np0005590528 systemd[1]: Reached target Cloud-init target.
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 21 08:04:53 np0005590528 dracut[1273]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: memstrack is not available
Jan 21 08:04:54 np0005590528 chronyd[787]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 21 08:04:54 np0005590528 dracut[1273]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 21 08:04:54 np0005590528 dracut[1273]: memstrack is not available
Jan 21 08:04:54 np0005590528 dracut[1273]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 21 08:04:54 np0005590528 dracut[1273]: *** Including module: systemd ***
Jan 21 08:04:54 np0005590528 irqbalance[775]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 21 08:04:54 np0005590528 irqbalance[775]: IRQ 25 affinity is now unmanaged
Jan 21 08:04:54 np0005590528 irqbalance[775]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 21 08:04:54 np0005590528 irqbalance[775]: IRQ 31 affinity is now unmanaged
Jan 21 08:04:54 np0005590528 irqbalance[775]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 21 08:04:54 np0005590528 irqbalance[775]: IRQ 28 affinity is now unmanaged
Jan 21 08:04:54 np0005590528 irqbalance[775]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 21 08:04:54 np0005590528 irqbalance[775]: IRQ 32 affinity is now unmanaged
Jan 21 08:04:54 np0005590528 irqbalance[775]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 21 08:04:54 np0005590528 irqbalance[775]: IRQ 30 affinity is now unmanaged
Jan 21 08:04:54 np0005590528 irqbalance[775]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 21 08:04:54 np0005590528 irqbalance[775]: IRQ 29 affinity is now unmanaged
Jan 21 08:04:55 np0005590528 dracut[1273]: *** Including module: fips ***
Jan 21 08:04:55 np0005590528 dracut[1273]: *** Including module: systemd-initrd ***
Jan 21 08:04:55 np0005590528 dracut[1273]: *** Including module: i18n ***
Jan 21 08:04:55 np0005590528 dracut[1273]: *** Including module: drm ***
Jan 21 08:04:56 np0005590528 dracut[1273]: *** Including module: prefixdevname ***
Jan 21 08:04:56 np0005590528 dracut[1273]: *** Including module: kernel-modules ***
Jan 21 08:04:56 np0005590528 kernel: block vda: the capability attribute has been deprecated.
Jan 21 08:04:56 np0005590528 dracut[1273]: *** Including module: kernel-modules-extra ***
Jan 21 08:04:56 np0005590528 dracut[1273]: *** Including module: qemu ***
Jan 21 08:04:56 np0005590528 dracut[1273]: *** Including module: fstab-sys ***
Jan 21 08:04:56 np0005590528 dracut[1273]: *** Including module: rootfs-block ***
Jan 21 08:04:56 np0005590528 dracut[1273]: *** Including module: terminfo ***
Jan 21 08:04:56 np0005590528 dracut[1273]: *** Including module: udev-rules ***
Jan 21 08:04:57 np0005590528 dracut[1273]: Skipping udev rule: 91-permissions.rules
Jan 21 08:04:57 np0005590528 dracut[1273]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 21 08:04:57 np0005590528 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 08:04:57 np0005590528 dracut[1273]: *** Including module: virtiofs ***
Jan 21 08:04:57 np0005590528 dracut[1273]: *** Including module: dracut-systemd ***
Jan 21 08:04:57 np0005590528 dracut[1273]: *** Including module: usrmount ***
Jan 21 08:04:57 np0005590528 dracut[1273]: *** Including module: base ***
Jan 21 08:04:58 np0005590528 dracut[1273]: *** Including module: fs-lib ***
Jan 21 08:04:58 np0005590528 dracut[1273]: *** Including module: kdumpbase ***
Jan 21 08:04:58 np0005590528 dracut[1273]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 21 08:04:58 np0005590528 dracut[1273]:  microcode_ctl module: mangling fw_dir
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel" is ignored
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 21 08:04:58 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 21 08:04:59 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 21 08:04:59 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 21 08:04:59 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 21 08:04:59 np0005590528 dracut[1273]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 21 08:04:59 np0005590528 dracut[1273]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 21 08:04:59 np0005590528 dracut[1273]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 21 08:04:59 np0005590528 dracut[1273]: *** Including module: openssl ***
Jan 21 08:04:59 np0005590528 dracut[1273]: *** Including module: shutdown ***
Jan 21 08:04:59 np0005590528 dracut[1273]: *** Including module: squash ***
Jan 21 08:04:59 np0005590528 dracut[1273]: *** Including modules done ***
Jan 21 08:04:59 np0005590528 dracut[1273]: *** Installing kernel module dependencies ***
Jan 21 08:05:00 np0005590528 dracut[1273]: *** Installing kernel module dependencies done ***
Jan 21 08:05:00 np0005590528 dracut[1273]: *** Resolving executable dependencies ***
Jan 21 08:05:01 np0005590528 dracut[1273]: *** Resolving executable dependencies done ***
Jan 21 08:05:01 np0005590528 dracut[1273]: *** Generating early-microcode cpio image ***
Jan 21 08:05:01 np0005590528 dracut[1273]: *** Store current command line parameters ***
Jan 21 08:05:01 np0005590528 dracut[1273]: Stored kernel commandline:
Jan 21 08:05:01 np0005590528 dracut[1273]: No dracut internal kernel commandline stored in the initramfs
Jan 21 08:05:02 np0005590528 dracut[1273]: *** Install squash loader ***
Jan 21 08:05:03 np0005590528 dracut[1273]: *** Squashing the files inside the initramfs ***
Jan 21 08:05:04 np0005590528 dracut[1273]: *** Squashing the files inside the initramfs done ***
Jan 21 08:05:04 np0005590528 dracut[1273]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 21 08:05:04 np0005590528 dracut[1273]: *** Hardlinking files ***
Jan 21 08:05:04 np0005590528 dracut[1273]: *** Hardlinking files done ***
Jan 21 08:05:04 np0005590528 dracut[1273]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 21 08:05:05 np0005590528 kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Jan 21 08:05:05 np0005590528 kdumpctl[1015]: kdump: Starting kdump: [OK]
Jan 21 08:05:05 np0005590528 systemd[1]: Finished Crash recovery kernel arming.
Jan 21 08:05:05 np0005590528 systemd[1]: Startup finished in 1.994s (kernel) + 2.897s (initrd) + 22.533s (userspace) = 27.424s.
Jan 21 08:05:10 np0005590528 systemd[1]: Created slice User Slice of UID 1000.
Jan 21 08:05:10 np0005590528 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 21 08:05:10 np0005590528 systemd-logind[780]: New session 1 of user zuul.
Jan 21 08:05:10 np0005590528 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 21 08:05:10 np0005590528 systemd[1]: Starting User Manager for UID 1000...
Jan 21 08:05:11 np0005590528 systemd[4304]: Queued start job for default target Main User Target.
Jan 21 08:05:11 np0005590528 systemd[4304]: Created slice User Application Slice.
Jan 21 08:05:11 np0005590528 systemd[4304]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 08:05:11 np0005590528 systemd[4304]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 08:05:11 np0005590528 systemd[4304]: Reached target Paths.
Jan 21 08:05:11 np0005590528 systemd[4304]: Reached target Timers.
Jan 21 08:05:11 np0005590528 systemd[4304]: Starting D-Bus User Message Bus Socket...
Jan 21 08:05:11 np0005590528 systemd[4304]: Starting Create User's Volatile Files and Directories...
Jan 21 08:05:11 np0005590528 systemd[4304]: Finished Create User's Volatile Files and Directories.
Jan 21 08:05:11 np0005590528 systemd[4304]: Listening on D-Bus User Message Bus Socket.
Jan 21 08:05:11 np0005590528 systemd[4304]: Reached target Sockets.
Jan 21 08:05:11 np0005590528 systemd[4304]: Reached target Basic System.
Jan 21 08:05:11 np0005590528 systemd[4304]: Reached target Main User Target.
Jan 21 08:05:11 np0005590528 systemd[4304]: Startup finished in 141ms.
Jan 21 08:05:11 np0005590528 systemd[1]: Started User Manager for UID 1000.
Jan 21 08:05:11 np0005590528 systemd[1]: Started Session 1 of User zuul.
Jan 21 08:05:11 np0005590528 python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:05:15 np0005590528 python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:05:15 np0005590528 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 08:05:21 np0005590528 python3[4474]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:05:22 np0005590528 python3[4514]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 21 08:05:24 np0005590528 python3[4540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsdF4LHcgCitzYuwpx8IeWyXT6x4WtGnCWEH2AQsvDbI0qT1tLuhHEXMtGugk/Pi8705OFZ6oLOh1v5dxeB09R5GRYlKJqs3AWQBIzQ19/qlUi2IxjhttDTH2WNwx7Zy/ku8/ZiIuD0uwSaZW6C8vWfRviIyOt7SPr67C6i4Iu8NsM+frCvwveSxcQZqDzT+P5bGJ7dgR7l8OU08b5nG0LWZMocQAguPV9kvvxLG1pKi2R/9BnSzVlnicsOz5kUuOS8oJEWzZaXTq+0EaBsv/sfakOO0sdeQLIg5TKPIruwiSi4T4LwUHlQm3OErcRl46I5Nl8HOMS9bZksFo0TCG6mzTjHe5Y/BC/bLWMY9IKh+pxKKm5LP2oaxXZ9PQC1qQrCv1F5o6Fp/g/0uSamI5yMMF+aqQParEMZTL9BfbNSbszgl1m002zzgbrDKapw1xBnfUFax6bhhEW3GIxZoQFDIqnI0CjKHXb7o4BvmtBfT2hNwfajrfV2j9ZhFtT0Rk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:24 np0005590528 python3[4564]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:24 np0005590528 python3[4663]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:05:25 np0005590528 python3[4734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769000724.6284468-207-112255060050147/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=6e3408b1e76b4c9180c1ac911338c42c_id_rsa follow=False checksum=41d9316c0b93c8992b91b1784050b7c8701818b6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:25 np0005590528 python3[4857]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:05:26 np0005590528 python3[4928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769000725.5610757-240-203545685258382/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=6e3408b1e76b4c9180c1ac911338c42c_id_rsa.pub follow=False checksum=c49a97f0c4fc2f33fec803053643adf66dae09bd backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:27 np0005590528 python3[4976]: ansible-ping Invoked with data=pong
Jan 21 08:05:28 np0005590528 python3[5000]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:05:30 np0005590528 python3[5058]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 21 08:05:31 np0005590528 python3[5090]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:31 np0005590528 python3[5114]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:31 np0005590528 python3[5138]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:32 np0005590528 python3[5162]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:32 np0005590528 python3[5186]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:32 np0005590528 python3[5210]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:34 np0005590528 python3[5236]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:34 np0005590528 python3[5314]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:05:35 np0005590528 python3[5387]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769000734.4488103-21-76952579707930/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:36 np0005590528 python3[5435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:36 np0005590528 python3[5459]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:36 np0005590528 python3[5483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:36 np0005590528 python3[5507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:37 np0005590528 python3[5531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:37 np0005590528 python3[5555]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:37 np0005590528 python3[5579]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:38 np0005590528 python3[5603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:38 np0005590528 python3[5627]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:38 np0005590528 python3[5651]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:38 np0005590528 python3[5675]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:39 np0005590528 python3[5699]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:39 np0005590528 python3[5723]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:39 np0005590528 python3[5747]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:40 np0005590528 python3[5771]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:40 np0005590528 python3[5795]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:40 np0005590528 python3[5819]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:40 np0005590528 python3[5843]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:41 np0005590528 python3[5867]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:41 np0005590528 python3[5891]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:41 np0005590528 python3[5915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:42 np0005590528 python3[5939]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:42 np0005590528 python3[5963]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:42 np0005590528 python3[5987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:43 np0005590528 python3[6011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:43 np0005590528 python3[6035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:05:45 np0005590528 python3[6061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 08:05:45 np0005590528 systemd[1]: Starting Time & Date Service...
Jan 21 08:05:45 np0005590528 systemd[1]: Started Time & Date Service.
Jan 21 08:05:45 np0005590528 systemd-timedated[6063]: Changed time zone to 'UTC' (UTC).
Jan 21 08:05:46 np0005590528 python3[6092]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:46 np0005590528 python3[6168]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:05:46 np0005590528 python3[6239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769000746.191636-153-26031791702835/source _original_basename=tmpq566vlfp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:47 np0005590528 python3[6339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:05:47 np0005590528 python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769000747.060639-183-28577305908758/source _original_basename=tmp8je96xv6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:48 np0005590528 python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:05:48 np0005590528 python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769000748.129764-231-101550851739236/source _original_basename=tmpqxekt5vr follow=False checksum=66d49d5bab4d1d03dee6fa3749e9aaa420813b05 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:49 np0005590528 python3[6633]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:05:49 np0005590528 python3[6659]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:05:50 np0005590528 python3[6739]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:05:50 np0005590528 python3[6812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769000749.873512-273-152258448807072/source _original_basename=tmp9pfe2lww follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:05:51 np0005590528 python3[6863]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-9fb1-0c7f-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:05:51 np0005590528 python3[6891]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9fb1-0c7f-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 21 08:05:52 np0005590528 python3[6920]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:06:10 np0005590528 python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:06:15 np0005590528 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 21 08:06:47 np0005590528 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 21 08:06:47 np0005590528 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.3824] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 08:06:47 np0005590528 systemd-udevd[6950]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4044] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4077] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4081] device (eth1): carrier: link connected
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4084] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4093] policy: auto-activating connection 'Wired connection 1' (5f608bee-bbd6-3307-abae-f2f56ef54334)
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4098] device (eth1): Activation: starting connection 'Wired connection 1' (5f608bee-bbd6-3307-abae-f2f56ef54334)
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4099] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4106] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4111] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:06:47 np0005590528 NetworkManager[854]: <info>  [1769000807.4116] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:06:48 np0005590528 python3[6976]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-a58f-98b0-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:06:58 np0005590528 python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:06:58 np0005590528 python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769000818.1373737-102-176631469967612/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=118e1e84e3aa55e6ebd5025d71e642a64e6c9e3d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:06:59 np0005590528 python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:06:59 np0005590528 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 21 08:06:59 np0005590528 systemd[1]: Stopped Network Manager Wait Online.
Jan 21 08:06:59 np0005590528 systemd[1]: Stopping Network Manager Wait Online...
Jan 21 08:06:59 np0005590528 NetworkManager[854]: <info>  [1769000819.5897] caught SIGTERM, shutting down normally.
Jan 21 08:06:59 np0005590528 systemd[1]: Stopping Network Manager...
Jan 21 08:06:59 np0005590528 NetworkManager[854]: <info>  [1769000819.5918] dhcp4 (eth0): canceled DHCP transaction
Jan 21 08:06:59 np0005590528 NetworkManager[854]: <info>  [1769000819.5919] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:06:59 np0005590528 NetworkManager[854]: <info>  [1769000819.5919] dhcp4 (eth0): state changed no lease
Jan 21 08:06:59 np0005590528 NetworkManager[854]: <info>  [1769000819.5923] manager: NetworkManager state is now CONNECTING
Jan 21 08:06:59 np0005590528 NetworkManager[854]: <info>  [1769000819.6016] dhcp4 (eth1): canceled DHCP transaction
Jan 21 08:06:59 np0005590528 NetworkManager[854]: <info>  [1769000819.6017] dhcp4 (eth1): state changed no lease
Jan 21 08:06:59 np0005590528 NetworkManager[854]: <info>  [1769000819.6100] exiting (success)
Jan 21 08:06:59 np0005590528 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 08:06:59 np0005590528 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 08:06:59 np0005590528 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 21 08:06:59 np0005590528 systemd[1]: Stopped Network Manager.
Jan 21 08:06:59 np0005590528 systemd[1]: Starting Network Manager...
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.6756] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3db60b82-452d-4090-8c5d-4863fb6f0cf4)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.6758] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.6824] manager[0x55a6518db000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 08:06:59 np0005590528 systemd[1]: Starting Hostname Service...
Jan 21 08:06:59 np0005590528 systemd[1]: Started Hostname Service.
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7854] hostname: hostname: using hostnamed
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7856] hostname: static hostname changed from (none) to "np0005590528.novalocal"
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7865] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7872] manager[0x55a6518db000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7872] manager[0x55a6518db000]: rfkill: WWAN hardware radio set enabled
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7918] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7918] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7919] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7920] manager: Networking is enabled by state file
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7924] settings: Loaded settings plugin: keyfile (internal)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7930] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7978] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.7995] dhcp: init: Using DHCP client 'internal'
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8001] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8011] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8021] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8035] device (lo): Activation: starting connection 'lo' (cb2caf48-e7d3-4014-a1eb-1fea24d085c3)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8050] device (eth0): carrier: link connected
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8059] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8071] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8071] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8088] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8103] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8118] device (eth1): carrier: link connected
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8125] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8136] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5f608bee-bbd6-3307-abae-f2f56ef54334) (indicated)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8137] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8149] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8162] device (eth1): Activation: starting connection 'Wired connection 1' (5f608bee-bbd6-3307-abae-f2f56ef54334)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8171] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 08:06:59 np0005590528 systemd[1]: Started Network Manager.
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8189] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8201] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8206] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8209] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8213] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8218] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8221] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8226] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8249] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8253] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8261] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8264] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8279] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8282] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8287] device (lo): Activation: successful, device activated.
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8293] dhcp4 (eth0): state changed new lease, address=38.102.83.175
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8298] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 08:06:59 np0005590528 systemd[1]: Starting Network Manager Wait Online...
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8356] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8368] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8370] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8373] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8376] device (eth0): Activation: successful, device activated.
Jan 21 08:06:59 np0005590528 NetworkManager[7188]: <info>  [1769000819.8380] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 08:07:00 np0005590528 python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-a58f-98b0-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:07:09 np0005590528 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 08:07:29 np0005590528 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9000] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 08:07:44 np0005590528 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 08:07:44 np0005590528 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9372] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9376] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9393] device (eth1): Activation: successful, device activated.
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9407] manager: startup complete
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9412] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <warn>  [1769000864.9428] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9440] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 21 08:07:44 np0005590528 systemd[1]: Finished Network Manager Wait Online.
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9567] dhcp4 (eth1): canceled DHCP transaction
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9568] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9568] dhcp4 (eth1): state changed no lease
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9588] policy: auto-activating connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa)
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9593] device (eth1): Activation: starting connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa)
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9594] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9597] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9612] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9621] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9663] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9664] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:07:44 np0005590528 NetworkManager[7188]: <info>  [1769000864.9670] device (eth1): Activation: successful, device activated.
Jan 21 08:07:51 np0005590528 systemd[4304]: Starting Mark boot as successful...
Jan 21 08:07:51 np0005590528 systemd[4304]: Finished Mark boot as successful.
Jan 21 08:07:55 np0005590528 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 08:08:00 np0005590528 systemd-logind[780]: Session 1 logged out. Waiting for processes to exit.
Jan 21 08:08:03 np0005590528 systemd-logind[780]: New session 3 of user zuul.
Jan 21 08:08:03 np0005590528 systemd[1]: Started Session 3 of User zuul.
Jan 21 08:08:03 np0005590528 python3[7373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:08:04 np0005590528 python3[7446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769000883.3891537-267-275332765273714/source _original_basename=tmpttxj4f79 follow=False checksum=c968927fe5bb1666c25ab44199aea18189d3ab99 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:08:06 np0005590528 systemd[1]: session-3.scope: Deactivated successfully.
Jan 21 08:08:06 np0005590528 systemd-logind[780]: Session 3 logged out. Waiting for processes to exit.
Jan 21 08:08:06 np0005590528 systemd-logind[780]: Removed session 3.
Jan 21 08:10:51 np0005590528 systemd[4304]: Created slice User Background Tasks Slice.
Jan 21 08:10:51 np0005590528 systemd[4304]: Starting Cleanup of User's Temporary Files and Directories...
Jan 21 08:10:51 np0005590528 systemd[4304]: Finished Cleanup of User's Temporary Files and Directories.
Jan 21 08:15:28 np0005590528 systemd-logind[780]: New session 4 of user zuul.
Jan 21 08:15:28 np0005590528 systemd[1]: Started Session 4 of User zuul.
Jan 21 08:15:28 np0005590528 python3[7505]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9167-ce6c-00000000216d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:15:29 np0005590528 python3[7534]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:15:29 np0005590528 python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:15:29 np0005590528 python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:15:29 np0005590528 python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:15:30 np0005590528 python3[7638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:15:30 np0005590528 python3[7716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:15:31 np0005590528 python3[7789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769001330.4980493-498-80855646714557/source _original_basename=tmp1krssaov follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:15:31 np0005590528 python3[7839]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 08:15:31 np0005590528 systemd[1]: Reloading.
Jan 21 08:15:31 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:15:33 np0005590528 python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 21 08:15:34 np0005590528 python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:15:34 np0005590528 python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:15:34 np0005590528 python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:15:34 np0005590528 python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:15:35 np0005590528 python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9167-ce6c-000000002174-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:15:35 np0005590528 python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:15:37 np0005590528 systemd[1]: session-4.scope: Deactivated successfully.
Jan 21 08:15:37 np0005590528 systemd[1]: session-4.scope: Consumed 4.105s CPU time.
Jan 21 08:15:37 np0005590528 systemd-logind[780]: Session 4 logged out. Waiting for processes to exit.
Jan 21 08:15:37 np0005590528 systemd-logind[780]: Removed session 4.
Jan 21 08:15:39 np0005590528 systemd-logind[780]: New session 5 of user zuul.
Jan 21 08:15:39 np0005590528 systemd[1]: Started Session 5 of User zuul.
Jan 21 08:15:39 np0005590528 python3[8099]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 08:15:44 np0005590528 irqbalance[775]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 21 08:15:44 np0005590528 irqbalance[775]: IRQ 27 affinity is now unmanaged
Jan 21 08:15:45 np0005590528 setsebool[8139]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 21 08:15:45 np0005590528 setsebool[8139]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 21 08:15:57 np0005590528 kernel: SELinux:  Converting 385 SID table entries...
Jan 21 08:15:57 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:15:57 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:15:57 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:15:57 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:15:57 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:15:57 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:15:57 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:16:12 np0005590528 kernel: SELinux:  Converting 388 SID table entries...
Jan 21 08:16:13 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:16:13 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:16:13 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:16:13 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:16:13 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:16:13 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:16:13 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:16:34 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 21 08:16:34 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 08:16:34 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 08:16:34 np0005590528 systemd[1]: Reloading.
Jan 21 08:16:34 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:16:34 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 08:16:37 np0005590528 python3[10877]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9b47-3828-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:16:38 np0005590528 kernel: evm: overlay not supported
Jan 21 08:16:38 np0005590528 systemd[4304]: Starting D-Bus User Message Bus...
Jan 21 08:16:38 np0005590528 dbus-broker-launch[11762]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 21 08:16:38 np0005590528 dbus-broker-launch[11762]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 21 08:16:38 np0005590528 systemd[4304]: Started D-Bus User Message Bus.
Jan 21 08:16:38 np0005590528 dbus-broker-lau[11762]: Ready
Jan 21 08:16:38 np0005590528 systemd[4304]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 21 08:16:38 np0005590528 systemd[4304]: Created slice Slice /user.
Jan 21 08:16:38 np0005590528 systemd[4304]: podman-11653.scope: unit configures an IP firewall, but not running as root.
Jan 21 08:16:38 np0005590528 systemd[4304]: (This warning is only shown for the first unit using IP firewalling.)
Jan 21 08:16:38 np0005590528 systemd[4304]: Started podman-11653.scope.
Jan 21 08:16:38 np0005590528 systemd[4304]: Started podman-pause-e8ee7d97.scope.
Jan 21 08:16:41 np0005590528 python3[13922]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.83:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.83:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:16:41 np0005590528 python3[13922]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 21 08:16:42 np0005590528 systemd[1]: session-5.scope: Deactivated successfully.
Jan 21 08:16:42 np0005590528 systemd[1]: session-5.scope: Consumed 45.482s CPU time.
Jan 21 08:16:42 np0005590528 systemd-logind[780]: Session 5 logged out. Waiting for processes to exit.
Jan 21 08:16:42 np0005590528 systemd-logind[780]: Removed session 5.
Jan 21 08:17:04 np0005590528 systemd-logind[780]: New session 6 of user zuul.
Jan 21 08:17:04 np0005590528 systemd[1]: Started Session 6 of User zuul.
Jan 21 08:17:04 np0005590528 python3[21768]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNuUYMxZig0i8kJCYUhWki8ZvkWICB3zdbibeZ1b2gDaGocnHuZKkH+w5kWILwxK5bpN69Tt67l9rn12M2mvskk= zuul@np0005590527.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:17:04 np0005590528 python3[21941]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNuUYMxZig0i8kJCYUhWki8ZvkWICB3zdbibeZ1b2gDaGocnHuZKkH+w5kWILwxK5bpN69Tt67l9rn12M2mvskk= zuul@np0005590527.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:17:05 np0005590528 python3[22263]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005590528.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 21 08:17:06 np0005590528 python3[22469]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNuUYMxZig0i8kJCYUhWki8ZvkWICB3zdbibeZ1b2gDaGocnHuZKkH+w5kWILwxK5bpN69Tt67l9rn12M2mvskk= zuul@np0005590527.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 08:17:06 np0005590528 python3[22720]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:17:07 np0005590528 python3[22949]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769001426.417607-135-153270531149106/source _original_basename=tmp5hvqkc5e follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:17:07 np0005590528 python3[23247]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 21 08:17:07 np0005590528 systemd[1]: Starting Hostname Service...
Jan 21 08:17:07 np0005590528 systemd[1]: Started Hostname Service.
Jan 21 08:17:08 np0005590528 systemd-hostnamed[23342]: Changed pretty hostname to 'compute-0'
Jan 21 08:17:08 np0005590528 systemd-hostnamed[23342]: Hostname set to <compute-0> (static)
Jan 21 08:17:08 np0005590528 NetworkManager[7188]: <info>  [1769001428.0308] hostname: static hostname changed from "np0005590528.novalocal" to "compute-0"
Jan 21 08:17:08 np0005590528 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 08:17:08 np0005590528 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 08:17:08 np0005590528 systemd[1]: session-6.scope: Deactivated successfully.
Jan 21 08:17:08 np0005590528 systemd[1]: session-6.scope: Consumed 2.289s CPU time.
Jan 21 08:17:08 np0005590528 systemd-logind[780]: Session 6 logged out. Waiting for processes to exit.
Jan 21 08:17:08 np0005590528 systemd-logind[780]: Removed session 6.
Jan 21 08:17:18 np0005590528 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 08:17:29 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 08:17:29 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 08:17:29 np0005590528 systemd[1]: man-db-cache-update.service: Consumed 1min 4.355s CPU time.
Jan 21 08:17:29 np0005590528 systemd[1]: run-re46ce0be925a49cdab30d0fecf9b0a10.service: Deactivated successfully.
Jan 21 08:17:38 np0005590528 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 08:19:41 np0005590528 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 21 08:19:41 np0005590528 systemd[1]: Starting dnf makecache...
Jan 21 08:19:41 np0005590528 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 21 08:19:41 np0005590528 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 21 08:19:41 np0005590528 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 21 08:19:41 np0005590528 dnf[29918]: Failed determining last makecache time.
Jan 21 08:19:41 np0005590528 dnf[29918]: CentOS Stream 9 - BaseOS                         57 kB/s | 6.7 kB     00:00
Jan 21 08:19:41 np0005590528 dnf[29918]: CentOS Stream 9 - AppStream                      62 kB/s | 6.8 kB     00:00
Jan 21 08:19:42 np0005590528 dnf[29918]: CentOS Stream 9 - CRB                            51 kB/s | 6.6 kB     00:00
Jan 21 08:19:42 np0005590528 dnf[29918]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 21 08:19:42 np0005590528 dnf[29918]: Metadata cache created.
Jan 21 08:19:42 np0005590528 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 21 08:19:42 np0005590528 systemd[1]: Finished dnf makecache.
Jan 21 08:21:30 np0005590528 systemd-logind[780]: New session 7 of user zuul.
Jan 21 08:21:30 np0005590528 systemd[1]: Started Session 7 of User zuul.
Jan 21 08:21:30 np0005590528 python3[30001]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:21:32 np0005590528 python3[30117]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:21:32 np0005590528 python3[30190]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:21:33 np0005590528 python3[30216]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:21:33 np0005590528 python3[30289]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:21:34 np0005590528 python3[30315]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:21:34 np0005590528 python3[30388]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:21:34 np0005590528 python3[30414]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:21:35 np0005590528 python3[30487]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:21:35 np0005590528 python3[30513]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:21:35 np0005590528 python3[30586]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:21:36 np0005590528 python3[30612]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:21:36 np0005590528 python3[30685]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:21:36 np0005590528 python3[30711]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:21:37 np0005590528 python3[30784]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:21:49 np0005590528 python3[30842]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:26:49 np0005590528 systemd[1]: session-7.scope: Deactivated successfully.
Jan 21 08:26:49 np0005590528 systemd[1]: session-7.scope: Consumed 5.524s CPU time.
Jan 21 08:26:49 np0005590528 systemd-logind[780]: Session 7 logged out. Waiting for processes to exit.
Jan 21 08:26:49 np0005590528 systemd-logind[780]: Removed session 7.
Jan 21 08:35:01 np0005590528 systemd-logind[780]: New session 8 of user zuul.
Jan 21 08:35:01 np0005590528 systemd[1]: Started Session 8 of User zuul.
Jan 21 08:35:02 np0005590528 python3.9[31003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:35:03 np0005590528 python3.9[31184]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:35:10 np0005590528 systemd[1]: session-8.scope: Deactivated successfully.
Jan 21 08:35:10 np0005590528 systemd[1]: session-8.scope: Consumed 7.804s CPU time.
Jan 21 08:35:10 np0005590528 systemd-logind[780]: Session 8 logged out. Waiting for processes to exit.
Jan 21 08:35:10 np0005590528 systemd-logind[780]: Removed session 8.
Jan 21 08:35:26 np0005590528 systemd-logind[780]: New session 9 of user zuul.
Jan 21 08:35:26 np0005590528 systemd[1]: Started Session 9 of User zuul.
Jan 21 08:35:26 np0005590528 python3.9[31394]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 21 08:35:28 np0005590528 python3.9[31568]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:35:28 np0005590528 python3.9[31720]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:35:29 np0005590528 python3.9[31873]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:35:30 np0005590528 python3.9[32025]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:35:31 np0005590528 python3.9[32177]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:35:32 np0005590528 python3.9[32300]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002531.079989-68-191045931377035/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:35:33 np0005590528 python3.9[32452]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:35:34 np0005590528 python3.9[32608]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:35:34 np0005590528 python3.9[32760]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:35:35 np0005590528 python3.9[32910]: ansible-ansible.builtin.service_facts Invoked
Jan 21 08:35:40 np0005590528 python3.9[33163]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:35:41 np0005590528 python3.9[33313]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:35:42 np0005590528 python3.9[33467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:35:43 np0005590528 python3.9[33625]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:35:44 np0005590528 python3.9[33709]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:36:48 np0005590528 systemd[1]: Reloading.
Jan 21 08:36:48 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:36:49 np0005590528 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 21 08:36:49 np0005590528 systemd[1]: Reloading.
Jan 21 08:36:50 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:36:50 np0005590528 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 21 08:36:50 np0005590528 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 21 08:36:50 np0005590528 systemd[1]: Reloading.
Jan 21 08:36:50 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:36:50 np0005590528 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 21 08:36:50 np0005590528 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 08:36:50 np0005590528 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 08:36:50 np0005590528 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 08:38:07 np0005590528 kernel: SELinux:  Converting 2724 SID table entries...
Jan 21 08:38:07 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:38:07 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:38:07 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:38:07 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:38:07 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:38:07 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:38:07 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:38:07 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 21 08:38:07 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 08:38:07 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 08:38:07 np0005590528 systemd[1]: Reloading.
Jan 21 08:38:07 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:38:07 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 08:38:09 np0005590528 python3.9[35212]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:38:09 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 08:38:09 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 08:38:09 np0005590528 systemd[1]: man-db-cache-update.service: Consumed 1.312s CPU time.
Jan 21 08:38:09 np0005590528 systemd[1]: run-r3c9de995db8a4c159455629d668cc5c6.service: Deactivated successfully.
Jan 21 08:38:11 np0005590528 python3.9[35513]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 21 08:38:12 np0005590528 python3.9[35665]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 21 08:38:18 np0005590528 python3.9[35819]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:38:19 np0005590528 python3.9[35971]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 21 08:38:22 np0005590528 python3.9[36123]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:38:23 np0005590528 python3.9[36275]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:38:23 np0005590528 python3.9[36398]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002702.4851713-231-4128037627440/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:38:24 np0005590528 python3.9[36550]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:38:24 np0005590528 irqbalance[775]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 21 08:38:24 np0005590528 irqbalance[775]: IRQ 26 affinity is now unmanaged
Jan 21 08:38:25 np0005590528 python3.9[36702]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:38:26 np0005590528 python3.9[36855]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:38:27 np0005590528 python3.9[37007]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 21 08:38:27 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 08:38:27 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 08:38:28 np0005590528 python3.9[37161]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 08:38:29 np0005590528 python3.9[37319]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 08:38:29 np0005590528 python3.9[37479]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 21 08:38:30 np0005590528 python3.9[37632]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 08:38:31 np0005590528 python3.9[37790]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 21 08:38:32 np0005590528 python3.9[37942]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:38:36 np0005590528 python3.9[38095]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:38:37 np0005590528 python3.9[38247]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:38:37 np0005590528 python3.9[38370]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002716.7933543-350-129654173522440/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:38:38 np0005590528 python3.9[38522]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:38:39 np0005590528 systemd[1]: Starting Load Kernel Modules...
Jan 21 08:38:39 np0005590528 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 21 08:38:39 np0005590528 kernel: Bridge firewalling registered
Jan 21 08:38:39 np0005590528 systemd-modules-load[38526]: Inserted module 'br_netfilter'
Jan 21 08:38:39 np0005590528 systemd[1]: Finished Load Kernel Modules.
Jan 21 08:38:39 np0005590528 python3.9[38682]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:38:40 np0005590528 python3.9[38805]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002719.3101804-373-140991060332786/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:38:41 np0005590528 python3.9[38957]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:38:45 np0005590528 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 08:38:45 np0005590528 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 08:38:45 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 08:38:45 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 08:38:45 np0005590528 systemd[1]: Reloading.
Jan 21 08:38:46 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:38:46 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 08:38:47 np0005590528 python3.9[40204]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:38:48 np0005590528 python3.9[41141]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 21 08:38:48 np0005590528 python3.9[41814]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:38:49 np0005590528 python3.9[42723]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:38:49 np0005590528 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 08:38:50 np0005590528 systemd[1]: Starting Authorization Manager...
Jan 21 08:38:50 np0005590528 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 08:38:50 np0005590528 polkitd[43343]: Started polkitd version 0.117
Jan 21 08:38:50 np0005590528 systemd[1]: Started Authorization Manager.
Jan 21 08:38:50 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 08:38:50 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 08:38:50 np0005590528 systemd[1]: man-db-cache-update.service: Consumed 5.542s CPU time.
Jan 21 08:38:50 np0005590528 systemd[1]: run-rb06f943ba12041849ebaf1ffd88daa0d.service: Deactivated successfully.
Jan 21 08:38:51 np0005590528 python3.9[43514]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:38:51 np0005590528 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 21 08:38:51 np0005590528 systemd[1]: tuned.service: Deactivated successfully.
Jan 21 08:38:51 np0005590528 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 21 08:38:51 np0005590528 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 08:38:51 np0005590528 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 08:38:52 np0005590528 python3.9[43675]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 21 08:38:54 np0005590528 python3.9[43827]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:38:54 np0005590528 systemd[1]: Reloading.
Jan 21 08:38:54 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:38:55 np0005590528 python3.9[44016]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:38:55 np0005590528 systemd[1]: Reloading.
Jan 21 08:38:55 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:38:56 np0005590528 python3.9[44205]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:38:58 np0005590528 python3.9[44358]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:38:58 np0005590528 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 21 08:38:59 np0005590528 python3.9[44511]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:39:01 np0005590528 python3.9[44674]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:39:02 np0005590528 python3.9[44827]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:39:02 np0005590528 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 21 08:39:02 np0005590528 systemd[1]: Stopped Apply Kernel Variables.
Jan 21 08:39:02 np0005590528 systemd[1]: Stopping Apply Kernel Variables...
Jan 21 08:39:02 np0005590528 systemd[1]: Starting Apply Kernel Variables...
Jan 21 08:39:02 np0005590528 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 21 08:39:02 np0005590528 systemd[1]: Finished Apply Kernel Variables.
Jan 21 08:39:02 np0005590528 systemd[1]: session-9.scope: Deactivated successfully.
Jan 21 08:39:02 np0005590528 systemd[1]: session-9.scope: Consumed 2min 26.203s CPU time.
Jan 21 08:39:02 np0005590528 systemd-logind[780]: Session 9 logged out. Waiting for processes to exit.
Jan 21 08:39:02 np0005590528 systemd-logind[780]: Removed session 9.
Jan 21 08:39:08 np0005590528 systemd-logind[780]: New session 10 of user zuul.
Jan 21 08:39:08 np0005590528 systemd[1]: Started Session 10 of User zuul.
Jan 21 08:39:09 np0005590528 python3.9[45010]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:39:10 np0005590528 python3.9[45166]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 21 08:39:11 np0005590528 python3.9[45319]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 08:39:12 np0005590528 python3.9[45477]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 08:39:13 np0005590528 python3.9[45637]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:39:14 np0005590528 python3.9[45721]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 08:39:18 np0005590528 python3.9[45884]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:39:30 np0005590528 kernel: SELinux:  Converting 2736 SID table entries...
Jan 21 08:39:30 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:39:30 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:39:30 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:39:30 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:39:30 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:39:30 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:39:30 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:39:30 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 21 08:39:30 np0005590528 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 21 08:39:32 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 08:39:32 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 08:39:32 np0005590528 systemd[1]: Reloading.
Jan 21 08:39:32 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:39:32 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:39:32 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 08:39:33 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 08:39:33 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 08:39:33 np0005590528 systemd[1]: run-r076e6202699647f4950455b338534f78.service: Deactivated successfully.
Jan 21 08:39:34 np0005590528 python3.9[46983]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:39:34 np0005590528 systemd[1]: Reloading.
Jan 21 08:39:34 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:39:34 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:39:35 np0005590528 systemd[1]: Starting Open vSwitch Database Unit...
Jan 21 08:39:35 np0005590528 chown[47025]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 21 08:39:35 np0005590528 ovs-ctl[47030]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 21 08:39:35 np0005590528 ovs-ctl[47030]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 21 08:39:35 np0005590528 ovs-ctl[47030]: Starting ovsdb-server [  OK  ]
Jan 21 08:39:35 np0005590528 ovs-vsctl[47079]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 21 08:39:35 np0005590528 ovs-vsctl[47099]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"3ade990a-d6f9-4724-a58c-009e4fc34364\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 21 08:39:35 np0005590528 ovs-ctl[47030]: Configuring Open vSwitch system IDs [  OK  ]
Jan 21 08:39:35 np0005590528 ovs-ctl[47030]: Enabling remote OVSDB managers [  OK  ]
Jan 21 08:39:35 np0005590528 systemd[1]: Started Open vSwitch Database Unit.
Jan 21 08:39:35 np0005590528 ovs-vsctl[47105]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 21 08:39:35 np0005590528 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 21 08:39:35 np0005590528 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 21 08:39:35 np0005590528 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 21 08:39:35 np0005590528 kernel: openvswitch: Open vSwitch switching datapath
Jan 21 08:39:35 np0005590528 ovs-ctl[47149]: Inserting openvswitch module [  OK  ]
Jan 21 08:39:35 np0005590528 ovs-ctl[47118]: Starting ovs-vswitchd [  OK  ]
Jan 21 08:39:35 np0005590528 ovs-vsctl[47166]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 21 08:39:35 np0005590528 ovs-ctl[47118]: Enabling remote OVSDB managers [  OK  ]
Jan 21 08:39:35 np0005590528 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 21 08:39:35 np0005590528 systemd[1]: Starting Open vSwitch...
Jan 21 08:39:35 np0005590528 systemd[1]: Finished Open vSwitch.
Jan 21 08:39:36 np0005590528 python3.9[47318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:39:37 np0005590528 python3.9[47470]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 21 08:39:39 np0005590528 kernel: SELinux:  Converting 2750 SID table entries...
Jan 21 08:39:39 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:39:39 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:39:39 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:39:39 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:39:39 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:39:39 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:39:39 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:39:40 np0005590528 python3.9[47625]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:39:41 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 21 08:39:41 np0005590528 python3.9[47783]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:39:43 np0005590528 python3.9[47936]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:39:45 np0005590528 python3.9[48223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 21 08:39:46 np0005590528 python3.9[48373]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:39:46 np0005590528 python3.9[48527]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:39:48 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 08:39:48 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 08:39:48 np0005590528 systemd[1]: Reloading.
Jan 21 08:39:48 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:39:48 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:39:48 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 08:39:49 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 08:39:49 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 08:39:49 np0005590528 systemd[1]: run-r2c5a4c05df974a29a01dc217cd218032.service: Deactivated successfully.
Jan 21 08:39:50 np0005590528 python3.9[48843]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:39:50 np0005590528 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 21 08:39:50 np0005590528 systemd[1]: Stopped Network Manager Wait Online.
Jan 21 08:39:50 np0005590528 systemd[1]: Stopping Network Manager Wait Online...
Jan 21 08:39:50 np0005590528 NetworkManager[7188]: <info>  [1769002790.1420] caught SIGTERM, shutting down normally.
Jan 21 08:39:50 np0005590528 systemd[1]: Stopping Network Manager...
Jan 21 08:39:50 np0005590528 NetworkManager[7188]: <info>  [1769002790.1439] dhcp4 (eth0): canceled DHCP transaction
Jan 21 08:39:50 np0005590528 NetworkManager[7188]: <info>  [1769002790.1439] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:39:50 np0005590528 NetworkManager[7188]: <info>  [1769002790.1439] dhcp4 (eth0): state changed no lease
Jan 21 08:39:50 np0005590528 NetworkManager[7188]: <info>  [1769002790.1443] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 08:39:50 np0005590528 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 08:39:50 np0005590528 NetworkManager[7188]: <info>  [1769002790.5891] exiting (success)
Jan 21 08:39:50 np0005590528 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 08:39:50 np0005590528 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 21 08:39:50 np0005590528 systemd[1]: Stopped Network Manager.
Jan 21 08:39:50 np0005590528 systemd[1]: NetworkManager.service: Consumed 12.661s CPU time, 4.1M memory peak, read 0B from disk, written 26.5K to disk.
Jan 21 08:39:50 np0005590528 systemd[1]: Starting Network Manager...
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.6936] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3db60b82-452d-4090-8c5d-4863fb6f0cf4)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.6937] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.7010] manager[0x561373e6a000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 08:39:50 np0005590528 systemd[1]: Starting Hostname Service...
Jan 21 08:39:50 np0005590528 systemd[1]: Started Hostname Service.
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.7979] hostname: hostname: using hostnamed
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.7979] hostname: static hostname changed from (none) to "compute-0"
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.7989] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.7996] manager[0x561373e6a000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.7997] manager[0x561373e6a000]: rfkill: WWAN hardware radio set enabled
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8033] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8050] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8051] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8052] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8053] manager: Networking is enabled by state file
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8057] settings: Loaded settings plugin: keyfile (internal)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8063] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8107] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8124] dhcp: init: Using DHCP client 'internal'
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8129] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8137] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8144] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8156] device (lo): Activation: starting connection 'lo' (cb2caf48-e7d3-4014-a1eb-1fea24d085c3)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8166] device (eth0): carrier: link connected
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8173] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8183] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8183] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8193] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8204] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8216] device (eth1): carrier: link connected
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8223] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8231] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa) (indicated)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8232] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8240] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8250] device (eth1): Activation: starting connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa)
Jan 21 08:39:50 np0005590528 systemd[1]: Started Network Manager.
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8260] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8273] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8276] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8278] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8280] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8283] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8284] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8287] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8290] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8296] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8299] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8309] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8322] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8336] dhcp4 (eth0): state changed new lease, address=38.102.83.175
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8343] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8432] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8439] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8440] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8441] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 08:39:50 np0005590528 systemd[1]: Starting Network Manager Wait Online...
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8445] device (lo): Activation: successful, device activated.
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8452] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8454] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8457] device (eth1): Activation: successful, device activated.
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8465] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8466] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8469] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8471] device (eth0): Activation: successful, device activated.
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8476] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 08:39:50 np0005590528 NetworkManager[48860]: <info>  [1769002790.8478] manager: startup complete
Jan 21 08:39:50 np0005590528 systemd[1]: Finished Network Manager Wait Online.
Jan 21 08:39:51 np0005590528 python3.9[49071]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:40:00 np0005590528 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 08:40:02 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 08:40:02 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 08:40:02 np0005590528 systemd[1]: Reloading.
Jan 21 08:40:02 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:40:02 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:40:02 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 08:40:03 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 08:40:03 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 08:40:03 np0005590528 systemd[1]: run-rcb4219d668064803a9356f4793aff78d.service: Deactivated successfully.
Jan 21 08:40:04 np0005590528 python3.9[49530]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:40:05 np0005590528 python3.9[49682]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:06 np0005590528 python3.9[49836]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:07 np0005590528 python3.9[49988]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:07 np0005590528 python3.9[50140]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:08 np0005590528 python3.9[50292]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:09 np0005590528 python3.9[50444]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:40:09 np0005590528 python3.9[50567]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002808.682205-224-164963269092930/.source _original_basename=.0j3u85f9 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:10 np0005590528 python3.9[50719]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:11 np0005590528 python3.9[50871]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 21 08:40:12 np0005590528 python3.9[51023]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:14 np0005590528 python3.9[51450]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 21 08:40:15 np0005590528 ansible-async_wrapper.py[51625]: Invoked with j629316759519 300 /home/zuul/.ansible/tmp/ansible-tmp-1769002814.7956567-290-206890208182593/AnsiballZ_edpm_os_net_config.py _
Jan 21 08:40:15 np0005590528 ansible-async_wrapper.py[51628]: Starting module and watcher
Jan 21 08:40:15 np0005590528 ansible-async_wrapper.py[51628]: Start watching 51629 (300)
Jan 21 08:40:15 np0005590528 ansible-async_wrapper.py[51629]: Start module (51629)
Jan 21 08:40:15 np0005590528 ansible-async_wrapper.py[51625]: Return async_wrapper task started.
Jan 21 08:40:15 np0005590528 python3.9[51630]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 21 08:40:16 np0005590528 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 21 08:40:16 np0005590528 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 21 08:40:16 np0005590528 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 21 08:40:16 np0005590528 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 21 08:40:16 np0005590528 kernel: cfg80211: failed to load regulatory.db
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0016] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0033] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0562] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0564] audit: op="connection-add" uuid="7423b76c-a3ba-4491-8849-a86ec82668ec" name="br-ex-br" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0583] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0584] audit: op="connection-add" uuid="6796aa1a-41d2-4f88-9ec7-7e010f9b1349" name="br-ex-port" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0599] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0601] audit: op="connection-add" uuid="91b265ce-7b56-4dfe-bfa2-b204c0b4ee75" name="eth1-port" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0614] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0616] audit: op="connection-add" uuid="de9e755b-0037-4ef1-9938-729e7cd85839" name="vlan20-port" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0632] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0633] audit: op="connection-add" uuid="b49f9fbb-58da-49d9-a6f1-b74cf7f1d938" name="vlan21-port" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0650] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0651] audit: op="connection-add" uuid="80289967-c9f5-4863-949c-57bc75b5ef2c" name="vlan22-port" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0666] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0667] audit: op="connection-add" uuid="418ba55d-d3f0-44f7-b662-c15e6c3b4e0a" name="vlan23-port" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0692] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0712] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0714] audit: op="connection-add" uuid="c7da1a0b-8e66-4362-85ca-19fe9b133d18" name="br-ex-if" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0768] audit: op="connection-update" uuid="d7910448-f944-5d05-b69e-270d04ed29fa" name="ci-private-network" args="ovs-external-ids.data,ipv4.never-default,ipv4.addresses,ipv4.dns,ipv4.routing-rules,ipv4.routes,ipv4.method,ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,connection.master,connection.port-type,connection.controller,connection.slave-type,connection.timestamp,ovs-interface.type" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0788] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0791] audit: op="connection-add" uuid="9654f8b1-396e-401c-9a67-52aa262a8521" name="vlan20-if" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0810] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0812] audit: op="connection-add" uuid="07d52a65-2435-4244-b570-d5d7586237c1" name="vlan21-if" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0830] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0832] audit: op="connection-add" uuid="d25fcd17-1bdc-48fe-bd74-94bc01108651" name="vlan22-if" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0853] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0854] audit: op="connection-add" uuid="e25ac4e0-2adc-4361-a477-1395d499c897" name="vlan23-if" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0868] audit: op="connection-delete" uuid="5f608bee-bbd6-3307-abae-f2f56ef54334" name="Wired connection 1" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0882] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.0884] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0893] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0898] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7423b76c-a3ba-4491-8849-a86ec82668ec)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0899] audit: op="connection-activate" uuid="7423b76c-a3ba-4491-8849-a86ec82668ec" name="br-ex-br" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0901] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.0903] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0909] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0916] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6796aa1a-41d2-4f88-9ec7-7e010f9b1349)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0929] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.0930] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0934] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0939] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (91b265ce-7b56-4dfe-bfa2-b204c0b4ee75)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0941] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.0942] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0947] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0950] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (de9e755b-0037-4ef1-9938-729e7cd85839)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0974] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.0976] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0981] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0984] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b49f9fbb-58da-49d9-a6f1-b74cf7f1d938)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0985] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.0986] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0990] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0994] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (80289967-c9f5-4863-949c-57bc75b5ef2c)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.0996] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.0997] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1001] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1005] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (418ba55d-d3f0-44f7-b662-c15e6c3b4e0a)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1005] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1008] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1009] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1015] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.1016] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1018] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1022] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (c7da1a0b-8e66-4362-85ca-19fe9b133d18)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1023] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1025] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1027] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1028] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1029] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1040] device (eth1): disconnecting for new activation request.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1041] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1044] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1046] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1048] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1052] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.1053] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1057] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1061] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (9654f8b1-396e-401c-9a67-52aa262a8521)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1062] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1065] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1067] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1068] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1071] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.1072] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1076] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1080] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (07d52a65-2435-4244-b570-d5d7586237c1)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1081] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1084] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1086] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1087] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1090] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.1090] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1093] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1097] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (d25fcd17-1bdc-48fe-bd74-94bc01108651)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1098] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1101] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1102] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1103] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1106] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <warn>  [1769002818.1107] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1110] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1114] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (e25ac4e0-2adc-4361-a477-1395d499c897)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1114] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1117] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1119] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1120] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1121] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1134] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1136] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1139] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1140] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1146] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1150] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1154] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 kernel: ovs-system: entered promiscuous mode
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1157] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1159] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1164] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1168] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1170] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1172] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1177] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1183] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 systemd-udevd[51636]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1186] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1188] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1193] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 kernel: Timeout policy base is empty
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1198] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1202] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1204] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1209] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1214] dhcp4 (eth0): canceled DHCP transaction
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1214] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1214] dhcp4 (eth0): state changed no lease
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1216] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1227] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1230] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51631 uid=0 result="fail" reason="Device is not activated"
Jan 21 08:40:18 np0005590528 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1278] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1282] dhcp4 (eth0): state changed new lease, address=38.102.83.175
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1291] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1339] device (eth1): disconnecting for new activation request.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1340] audit: op="connection-activate" uuid="d7910448-f944-5d05-b69e-270d04ed29fa" name="ci-private-network" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1341] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1351] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1377] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1377] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1493] device (eth1): Activation: starting connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa)
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1497] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1505] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1508] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1514] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1517] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 kernel: br-ex: entered promiscuous mode
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1521] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1522] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1523] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1524] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1525] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1527] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1547] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1554] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1557] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1560] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1563] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1568] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1572] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1576] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1580] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1584] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1588] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1593] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1598] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1608] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1614] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1627] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1638] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1643] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 kernel: vlan22: entered promiscuous mode
Jan 21 08:40:18 np0005590528 systemd-udevd[51635]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1650] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1654] device (eth1): Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1667] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1669] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1673] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 kernel: vlan20: entered promiscuous mode
Jan 21 08:40:18 np0005590528 kernel: vlan21: entered promiscuous mode
Jan 21 08:40:18 np0005590528 systemd-udevd[51637]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1774] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1790] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1802] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 21 08:40:18 np0005590528 kernel: vlan23: entered promiscuous mode
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1814] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1822] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1824] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1828] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1882] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1888] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1893] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1902] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1909] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1923] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1936] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1945] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1946] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1951] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1959] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1961] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 08:40:18 np0005590528 NetworkManager[48860]: <info>  [1769002818.1964] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 08:40:19 np0005590528 NetworkManager[48860]: <info>  [1769002819.3242] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 08:40:19 np0005590528 NetworkManager[48860]: <info>  [1769002819.4938] checkpoint[0x561373e40950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 21 08:40:19 np0005590528 NetworkManager[48860]: <info>  [1769002819.4942] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 08:40:19 np0005590528 python3.9[51989]: ansible-ansible.legacy.async_status Invoked with jid=j629316759519.51625 mode=status _async_dir=/root/.ansible_async
Jan 21 08:40:19 np0005590528 NetworkManager[48860]: <info>  [1769002819.8783] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51631 uid=0 result="success"
Jan 21 08:40:19 np0005590528 NetworkManager[48860]: <info>  [1769002819.8797] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51631 uid=0 result="success"
Jan 21 08:40:20 np0005590528 NetworkManager[48860]: <info>  [1769002820.2289] audit: op="networking-control" arg="global-dns-configuration" pid=51631 uid=0 result="success"
Jan 21 08:40:20 np0005590528 NetworkManager[48860]: <info>  [1769002820.2328] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 21 08:40:20 np0005590528 NetworkManager[48860]: <info>  [1769002820.2773] audit: op="networking-control" arg="global-dns-configuration" pid=51631 uid=0 result="success"
Jan 21 08:40:20 np0005590528 NetworkManager[48860]: <info>  [1769002820.3134] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51631 uid=0 result="success"
Jan 21 08:40:20 np0005590528 NetworkManager[48860]: <info>  [1769002820.4920] checkpoint[0x561373e40a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 21 08:40:20 np0005590528 NetworkManager[48860]: <info>  [1769002820.4926] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51631 uid=0 result="success"
Jan 21 08:40:20 np0005590528 ansible-async_wrapper.py[51629]: Module complete (51629)
Jan 21 08:40:20 np0005590528 ansible-async_wrapper.py[51628]: Done in kid B.
Jan 21 08:40:20 np0005590528 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 08:40:23 np0005590528 python3.9[52097]: ansible-ansible.legacy.async_status Invoked with jid=j629316759519.51625 mode=status _async_dir=/root/.ansible_async
Jan 21 08:40:23 np0005590528 python3.9[52197]: ansible-ansible.legacy.async_status Invoked with jid=j629316759519.51625 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 08:40:24 np0005590528 python3.9[52349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:40:25 np0005590528 python3.9[52472]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002823.8523803-317-157814109560431/.source.returncode _original_basename=.auwjidur follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:26 np0005590528 python3.9[52624]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:40:26 np0005590528 python3.9[52748]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002825.5607853-333-145563283559828/.source.cfg _original_basename=.26953fre follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:27 np0005590528 python3.9[52900]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:40:28 np0005590528 systemd[1]: Reloading Network Manager...
Jan 21 08:40:28 np0005590528 NetworkManager[48860]: <info>  [1769002828.7285] audit: op="reload" arg="0" pid=52904 uid=0 result="success"
Jan 21 08:40:28 np0005590528 NetworkManager[48860]: <info>  [1769002828.7293] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 21 08:40:29 np0005590528 systemd[1]: Reloaded Network Manager.
Jan 21 08:40:29 np0005590528 systemd-logind[780]: Session 10 logged out. Waiting for processes to exit.
Jan 21 08:40:29 np0005590528 systemd[1]: session-10.scope: Deactivated successfully.
Jan 21 08:40:29 np0005590528 systemd[1]: session-10.scope: Consumed 53.690s CPU time.
Jan 21 08:40:29 np0005590528 systemd-logind[780]: Removed session 10.
Jan 21 08:40:35 np0005590528 systemd-logind[780]: New session 11 of user zuul.
Jan 21 08:40:35 np0005590528 systemd[1]: Started Session 11 of User zuul.
Jan 21 08:40:36 np0005590528 python3.9[53088]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:40:37 np0005590528 python3.9[53243]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:40:38 np0005590528 python3.9[53436]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:40:38 np0005590528 systemd[1]: session-11.scope: Deactivated successfully.
Jan 21 08:40:38 np0005590528 systemd[1]: session-11.scope: Consumed 2.702s CPU time.
Jan 21 08:40:38 np0005590528 systemd-logind[780]: Session 11 logged out. Waiting for processes to exit.
Jan 21 08:40:38 np0005590528 systemd-logind[780]: Removed session 11.
Jan 21 08:40:39 np0005590528 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 08:40:43 np0005590528 systemd-logind[780]: New session 12 of user zuul.
Jan 21 08:40:43 np0005590528 systemd[1]: Started Session 12 of User zuul.
Jan 21 08:40:45 np0005590528 python3.9[53618]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:40:46 np0005590528 python3.9[53773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:40:47 np0005590528 python3.9[53929]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:40:47 np0005590528 python3.9[54013]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:40:49 np0005590528 python3.9[54167]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:40:51 np0005590528 python3.9[54362]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:52 np0005590528 python3.9[54514]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:40:52 np0005590528 podman[54515]: 2026-01-21 13:40:52.185176148 +0000 UTC m=+0.068195663 system refresh
Jan 21 08:40:53 np0005590528 python3.9[54678]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:40:53 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:40:53 np0005590528 python3.9[54801]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002852.4063685-74-30197939360619/.source.json follow=False _original_basename=podman_network_config.j2 checksum=7d70938f2a5932e44dd49d2f5c65a90ffbde64b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:40:54 np0005590528 python3.9[54953]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:40:55 np0005590528 python3.9[55076]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002853.9339652-89-265267989045200/.source.conf follow=False _original_basename=registries.conf.j2 checksum=97513ee69a4b3dc3c4fd06acbbcaa9a991e77aee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:40:55 np0005590528 python3.9[55228]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:40:56 np0005590528 python3.9[55380]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:40:57 np0005590528 python3.9[55532]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:40:57 np0005590528 python3.9[55684]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:40:58 np0005590528 python3.9[55836]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:41:00 np0005590528 python3.9[55989]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:41:01 np0005590528 python3.9[56143]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:41:02 np0005590528 python3.9[56295]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:41:03 np0005590528 python3.9[56447]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:41:04 np0005590528 python3.9[56600]: ansible-service_facts Invoked
Jan 21 08:41:04 np0005590528 network[56617]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 08:41:04 np0005590528 network[56618]: 'network-scripts' will be removed from distribution in near future.
Jan 21 08:41:04 np0005590528 network[56619]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 08:41:09 np0005590528 python3.9[57071]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:41:12 np0005590528 python3.9[57224]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 21 08:41:13 np0005590528 python3.9[57376]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:14 np0005590528 python3.9[57501]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002872.9902468-233-248980985567189/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:15 np0005590528 python3.9[57655]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:15 np0005590528 python3.9[57780]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002874.508318-248-196199981318750/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:16 np0005590528 python3.9[57934]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:18 np0005590528 python3.9[58088]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:41:19 np0005590528 python3.9[58172]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:41:20 np0005590528 python3.9[58326]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:41:21 np0005590528 python3.9[58410]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:41:21 np0005590528 chronyd[787]: chronyd exiting
Jan 21 08:41:21 np0005590528 systemd[1]: Stopping NTP client/server...
Jan 21 08:41:21 np0005590528 systemd[1]: chronyd.service: Deactivated successfully.
Jan 21 08:41:21 np0005590528 systemd[1]: Stopped NTP client/server.
Jan 21 08:41:21 np0005590528 systemd[1]: Starting NTP client/server...
Jan 21 08:41:21 np0005590528 chronyd[58418]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 21 08:41:21 np0005590528 chronyd[58418]: Frequency -23.188 +/- 0.457 ppm read from /var/lib/chrony/drift
Jan 21 08:41:21 np0005590528 chronyd[58418]: Loaded seccomp filter (level 2)
Jan 21 08:41:21 np0005590528 systemd[1]: Started NTP client/server.
Jan 21 08:41:21 np0005590528 systemd-logind[780]: Session 12 logged out. Waiting for processes to exit.
Jan 21 08:41:21 np0005590528 systemd[1]: session-12.scope: Deactivated successfully.
Jan 21 08:41:21 np0005590528 systemd[1]: session-12.scope: Consumed 28.583s CPU time.
Jan 21 08:41:21 np0005590528 systemd-logind[780]: Removed session 12.
Jan 21 08:41:27 np0005590528 systemd-logind[780]: New session 13 of user zuul.
Jan 21 08:41:27 np0005590528 systemd[1]: Started Session 13 of User zuul.
Jan 21 08:41:28 np0005590528 python3.9[58599]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:29 np0005590528 python3.9[58751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:30 np0005590528 python3.9[58874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002888.9165962-29-218683674307810/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:30 np0005590528 systemd[1]: session-13.scope: Deactivated successfully.
Jan 21 08:41:30 np0005590528 systemd[1]: session-13.scope: Consumed 1.694s CPU time.
Jan 21 08:41:30 np0005590528 systemd-logind[780]: Session 13 logged out. Waiting for processes to exit.
Jan 21 08:41:30 np0005590528 systemd-logind[780]: Removed session 13.
Jan 21 08:41:35 np0005590528 systemd-logind[780]: New session 14 of user zuul.
Jan 21 08:41:35 np0005590528 systemd[1]: Started Session 14 of User zuul.
Jan 21 08:41:36 np0005590528 python3.9[59052]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:41:38 np0005590528 python3.9[59208]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:39 np0005590528 python3.9[59383]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:39 np0005590528 python3.9[59506]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769002898.3492048-36-259041733745612/.source.json _original_basename=.fa46hlvz follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:40 np0005590528 python3.9[59658]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:41 np0005590528 python3.9[59781]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002900.1830714-59-91009601689435/.source _original_basename=.h_o39y7u follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:41 np0005590528 python3.9[59933]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:41:42 np0005590528 python3.9[60085]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:43 np0005590528 python3.9[60208]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002902.0245328-83-193828627161121/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:41:43 np0005590528 python3.9[60360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:44 np0005590528 python3.9[60483]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002903.262337-83-107574617176738/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:41:44 np0005590528 python3.9[60635]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:45 np0005590528 python3.9[60787]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:46 np0005590528 python3.9[60910]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002905.1376553-120-4260311095035/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:46 np0005590528 python3.9[61062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:47 np0005590528 python3.9[61185]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002906.4259424-135-30777830353773/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:48 np0005590528 python3.9[61337]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:41:48 np0005590528 systemd[1]: Reloading.
Jan 21 08:41:48 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:41:48 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:41:48 np0005590528 systemd[1]: Reloading.
Jan 21 08:41:48 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:41:48 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:41:49 np0005590528 systemd[1]: Starting EDPM Container Shutdown...
Jan 21 08:41:49 np0005590528 systemd[1]: Finished EDPM Container Shutdown.
Jan 21 08:41:49 np0005590528 python3.9[61565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:50 np0005590528 python3.9[61688]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002909.3276942-158-196414873739088/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:50 np0005590528 python3.9[61840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:41:51 np0005590528 python3.9[61963]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002910.5342133-173-117210165307911/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:41:52 np0005590528 python3.9[62115]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:41:52 np0005590528 systemd[1]: Reloading.
Jan 21 08:41:52 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:41:52 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:41:53 np0005590528 systemd[1]: Reloading.
Jan 21 08:41:53 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:41:53 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:41:53 np0005590528 systemd[1]: Starting Create netns directory...
Jan 21 08:41:53 np0005590528 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 08:41:53 np0005590528 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 08:41:53 np0005590528 systemd[1]: Finished Create netns directory.
Jan 21 08:41:54 np0005590528 python3.9[62341]: ansible-ansible.builtin.service_facts Invoked
Jan 21 08:41:55 np0005590528 network[62358]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 08:41:55 np0005590528 network[62359]: 'network-scripts' will be removed from distribution in near future.
Jan 21 08:41:55 np0005590528 network[62360]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 08:41:59 np0005590528 python3.9[62622]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:41:59 np0005590528 systemd[1]: Reloading.
Jan 21 08:41:59 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:41:59 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:41:59 np0005590528 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 21 08:41:59 np0005590528 iptables.init[62661]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 21 08:41:59 np0005590528 iptables.init[62661]: iptables: Flushing firewall rules: [  OK  ]
Jan 21 08:41:59 np0005590528 systemd[1]: iptables.service: Deactivated successfully.
Jan 21 08:41:59 np0005590528 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 21 08:42:00 np0005590528 python3.9[62857]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:42:01 np0005590528 python3.9[63011]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:42:02 np0005590528 systemd[1]: Reloading.
Jan 21 08:42:02 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:42:02 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:42:02 np0005590528 systemd[1]: Starting Netfilter Tables...
Jan 21 08:42:02 np0005590528 systemd[1]: Finished Netfilter Tables.
Jan 21 08:42:03 np0005590528 python3.9[63203]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:42:04 np0005590528 python3.9[63356]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:05 np0005590528 python3.9[63481]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002924.2587485-242-182667242178865/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:06 np0005590528 python3.9[63634]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:42:06 np0005590528 systemd[1]: Reloading OpenSSH server daemon...
Jan 21 08:42:06 np0005590528 systemd[1]: Reloaded OpenSSH server daemon.
Jan 21 08:42:07 np0005590528 python3.9[63790]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:07 np0005590528 python3.9[63942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:08 np0005590528 python3.9[64065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002927.289402-273-275917112385717/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:09 np0005590528 python3.9[64217]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 08:42:09 np0005590528 systemd[1]: Starting Time & Date Service...
Jan 21 08:42:09 np0005590528 systemd[1]: Started Time & Date Service.
Jan 21 08:42:10 np0005590528 python3.9[64373]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:10 np0005590528 python3.9[64525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:11 np0005590528 python3.9[64648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002930.4055378-308-133535548911999/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:12 np0005590528 python3.9[64800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:12 np0005590528 python3.9[64923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002931.6417792-323-132849077620096/.source.yaml _original_basename=.55wamjgi follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:13 np0005590528 python3.9[65075]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:13 np0005590528 python3.9[65198]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002932.8544264-338-84309408430547/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:14 np0005590528 python3.9[65350]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:42:15 np0005590528 python3.9[65503]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:42:16 np0005590528 python3[65656]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 08:42:16 np0005590528 python3.9[65808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:17 np0005590528 python3.9[65931]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002936.4407222-377-105466639307213/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:18 np0005590528 python3.9[66083]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:18 np0005590528 python3.9[66206]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002937.732308-392-139622077670560/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:19 np0005590528 python3.9[66358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:20 np0005590528 python3.9[66481]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002939.038404-407-232194023447796/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:20 np0005590528 python3.9[66633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:21 np0005590528 python3.9[66756]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002940.3012276-422-25199978685171/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:21 np0005590528 python3.9[66908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:42:22 np0005590528 python3.9[67031]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002941.4947796-437-280791997110998/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:23 np0005590528 python3.9[67183]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:23 np0005590528 python3.9[67335]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:42:24 np0005590528 python3.9[67494]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
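Editor's note: the `block=` parameter of the blockinfile task above is hard to read because the journal escapes embedded newlines as `#012`. Expanded, and combined with the logged markers (`marker=# {mark} ANSIBLE MANAGED BLOCK`, `marker_begin=BEGIN`, `marker_end=END`), it corresponds to this managed block in /etc/sysconfig/nftables.conf, which the task validates with `nft -c -f %s` before writing:

```nft
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

Include order matches the chains-before-rules-before-jumps ordering used by the `nft -c -f -` concatenation check two entries earlier.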
Jan 21 08:42:25 np0005590528 python3.9[67647]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:26 np0005590528 python3.9[67799]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:27 np0005590528 python3.9[67951]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 08:42:27 np0005590528 python3.9[68104]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
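Editor's note: the two ansible.posix.mount tasks above run with `state=mounted` and `boot=True`, so the module both mounts the hugetlbfs filesystems and persists them across reboots. Reconstructed from the logged parameters (`src=none`, `dump=0`, `passno=0`), the resulting /etc/fstab entries would look like:

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```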
Jan 21 08:42:28 np0005590528 systemd[1]: session-14.scope: Deactivated successfully.
Jan 21 08:42:28 np0005590528 systemd[1]: session-14.scope: Consumed 37.863s CPU time.
Jan 21 08:42:28 np0005590528 systemd-logind[780]: Session 14 logged out. Waiting for processes to exit.
Jan 21 08:42:28 np0005590528 systemd-logind[780]: Removed session 14.
Jan 21 08:42:32 np0005590528 systemd-logind[780]: New session 15 of user zuul.
Jan 21 08:42:32 np0005590528 systemd[1]: Started Session 15 of User zuul.
Jan 21 08:42:33 np0005590528 python3.9[68285]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 21 08:42:34 np0005590528 python3.9[68437]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:42:35 np0005590528 python3.9[68589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:42:36 np0005590528 python3.9[68741]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeFBF9sLBUut0jERuw8eMRSTmHQPq77CYOZnLVmOaBCBCSPbeUxgTSDGAypqgANDFspz2HthTRfZ/0obiaSrheRKp8JI8vmjOkZpbGmM9pA3z2/L+A3dJtYryJ7HhNyc/RGv6tDqg7CqaPNO1VlKkJaCblvoGA/sTsuLgg72/kyPlgz+xxZIIXUolJRTelowGJeLl4FZhJevZEH/0RgRZW5SIe7QgvHYRWR/yATnINpKKPRydWLgea+k//th3RGx9GuUGWuDCPeJvxRKrqAMI8uxmSm/8+i6EK0vVqkOdcdQRVsHY2r6DJ55kbxKE6zwdr/2TWUC4j2L+d8AvLLtPL6yx6yOUDHD9KicyxruiQYYwkskMnkAWJeSL1egxNDFgJCw7P56bEGIyFhPIAzxR1E0ZuAQqv/W1KYFqspYxqjsccWFRon0TW3DyHzXSXRZkvgVBAyZPlZBTcsw58X536t/6unFkYBPfaCNmQIGhaOZ0dFgK7Bl1Jj1cThi6d/bE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINb+axAz9AQLLF8DlI2l4unh/lYce78aEpf6RASalCvh#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHJ6/CEvuTJeUBrk8Nw85tSdtMYRRRBEbjPN601M+Wvbkfd6a4tr5R6VV6/ot3jZ0PwT+0BaXWVuiTlpRpxsLDo=#012 create=True mode=0644 path=/tmp/ansible.tr3wypvt state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:37 np0005590528 python3.9[68893]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tr3wypvt' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:42:38 np0005590528 python3.9[69047]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tr3wypvt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:38 np0005590528 systemd[1]: session-15.scope: Deactivated successfully.
Jan 21 08:42:38 np0005590528 systemd[1]: session-15.scope: Consumed 3.540s CPU time.
Jan 21 08:42:38 np0005590528 systemd-logind[780]: Session 15 logged out. Waiting for processes to exit.
Jan 21 08:42:38 np0005590528 systemd-logind[780]: Removed session 15.
Jan 21 08:42:39 np0005590528 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 08:42:44 np0005590528 systemd-logind[780]: New session 16 of user zuul.
Jan 21 08:42:44 np0005590528 systemd[1]: Started Session 16 of User zuul.
Jan 21 08:42:45 np0005590528 python3.9[69227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:42:46 np0005590528 python3.9[69383]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 08:42:47 np0005590528 python3.9[69537]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:42:48 np0005590528 python3.9[69690]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:42:49 np0005590528 python3.9[69843]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:42:50 np0005590528 python3.9[69997]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:42:50 np0005590528 python3.9[70152]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:42:51 np0005590528 systemd[1]: session-16.scope: Deactivated successfully.
Jan 21 08:42:51 np0005590528 systemd[1]: session-16.scope: Consumed 4.751s CPU time.
Jan 21 08:42:51 np0005590528 systemd-logind[780]: Session 16 logged out. Waiting for processes to exit.
Jan 21 08:42:51 np0005590528 systemd-logind[780]: Removed session 16.
Jan 21 08:42:56 np0005590528 systemd-logind[780]: New session 17 of user zuul.
Jan 21 08:42:56 np0005590528 systemd[1]: Started Session 17 of User zuul.
Jan 21 08:42:57 np0005590528 python3.9[70331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:42:58 np0005590528 python3.9[70487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:42:59 np0005590528 python3.9[70571]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 08:43:01 np0005590528 python3.9[70722]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:02 np0005590528 python3.9[70873]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 08:43:03 np0005590528 python3.9[71023]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:43:03 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 08:43:04 np0005590528 python3.9[71174]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:43:04 np0005590528 systemd[1]: session-17.scope: Deactivated successfully.
Jan 21 08:43:04 np0005590528 systemd[1]: session-17.scope: Consumed 6.076s CPU time.
Jan 21 08:43:04 np0005590528 systemd-logind[780]: Session 17 logged out. Waiting for processes to exit.
Jan 21 08:43:04 np0005590528 systemd-logind[780]: Removed session 17.
Jan 21 08:43:12 np0005590528 systemd-logind[780]: New session 18 of user zuul.
Jan 21 08:43:12 np0005590528 systemd[1]: Started Session 18 of User zuul.
Jan 21 08:43:19 np0005590528 python3[71940]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:43:20 np0005590528 python3[72035]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 08:43:22 np0005590528 python3[72062]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:43:22 np0005590528 python3[72088]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:22 np0005590528 kernel: loop: module loaded
Jan 21 08:43:22 np0005590528 kernel: loop3: detected capacity change from 0 to 41943040
Jan 21 08:43:23 np0005590528 python3[72123]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:23 np0005590528 lvm[72126]: PV /dev/loop3 not used.
Jan 21 08:43:23 np0005590528 lvm[72128]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:43:23 np0005590528 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 21 08:43:23 np0005590528 lvm[72137]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 21 08:43:23 np0005590528 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
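Editor's note: the two `_raw_params` payloads above are multi-line shell scripts with newlines journal-escaped as `#012`. Decoded, they create a 20 GiB sparse backing file, attach it to a loop device, and carve the whole device into one LV for the OSD. The sketch below demonstrates only the sparse-file step against a scratch path (the play itself targets /var/lib/ceph-osd-0.img and /dev/loop3, which require root); the root-only steps are reproduced as comments.

```shell
# Scratch path for illustration; the play writes /var/lib/ceph-osd-0.img.
img=$(mktemp /tmp/ceph-osd-demo.XXXXXX)

# Sparse 20 GiB backing file: count=0 writes no data, seek=20G truncates
# the output file to the seek offset, so no disk space is consumed.
dd if=/dev/zero of="$img" bs=1 count=0 seek=20G
ls -l "$img"

# Root-only steps from the logged tasks, for reference only:
#   losetup /dev/loop3 /var/lib/ceph-osd-0.img
#   pvcreate /dev/loop3
#   vgcreate ceph_vg0 /dev/loop3
#   lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
```

The ceph-osd-losetup-0.service unit installed immediately afterwards re-attaches the loop device at boot, which is why the subsequent journal entries show the LVM autoactivation firing again.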
Jan 21 08:43:23 np0005590528 python3[72215]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:43:24 np0005590528 python3[72288]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003003.6257122-36216-12752240275197/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:43:25 np0005590528 python3[72338]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:43:25 np0005590528 systemd[1]: Reloading.
Jan 21 08:43:25 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:43:25 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:43:25 np0005590528 systemd[1]: Starting Ceph OSD losetup...
Jan 21 08:43:25 np0005590528 bash[72378]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 21 08:43:25 np0005590528 systemd[1]: Finished Ceph OSD losetup.
Jan 21 08:43:25 np0005590528 lvm[72379]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:43:25 np0005590528 lvm[72379]: VG ceph_vg0 finished
Jan 21 08:43:25 np0005590528 python3[72405]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 08:43:27 np0005590528 python3[72432]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:43:27 np0005590528 python3[72458]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:28 np0005590528 kernel: loop4: detected capacity change from 0 to 41943040
Jan 21 08:43:28 np0005590528 python3[72490]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:28 np0005590528 lvm[72493]: PV /dev/loop4 not used.
Jan 21 08:43:28 np0005590528 lvm[72503]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:43:28 np0005590528 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 21 08:43:28 np0005590528 lvm[72505]:  1 logical volume(s) in volume group "ceph_vg1" now active
Jan 21 08:43:28 np0005590528 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 21 08:43:29 np0005590528 python3[72583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:43:29 np0005590528 python3[72656]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003008.9603553-36243-33519948277523/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:43:30 np0005590528 python3[72706]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:43:30 np0005590528 systemd[1]: Reloading.
Jan 21 08:43:30 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:43:30 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:43:30 np0005590528 systemd[1]: Starting Ceph OSD losetup...
Jan 21 08:43:30 np0005590528 bash[72746]: /dev/loop4: [64513]:4579793 (/var/lib/ceph-osd-1.img)
Jan 21 08:43:30 np0005590528 systemd[1]: Finished Ceph OSD losetup.
Jan 21 08:43:30 np0005590528 lvm[72747]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:43:30 np0005590528 lvm[72747]: VG ceph_vg1 finished
Jan 21 08:43:30 np0005590528 python3[72773]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 08:43:31 np0005590528 chronyd[58418]: Selected source 23.133.168.246 (pool.ntp.org)
Jan 21 08:43:32 np0005590528 python3[72800]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:43:32 np0005590528 python3[72826]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:32 np0005590528 kernel: loop5: detected capacity change from 0 to 41943040
Jan 21 08:43:33 np0005590528 python3[72858]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:33 np0005590528 lvm[72861]: PV /dev/loop5 not used.
Jan 21 08:43:33 np0005590528 lvm[72863]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:43:33 np0005590528 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 21 08:43:33 np0005590528 lvm[72867]:  1 logical volume(s) in volume group "ceph_vg2" now active
Jan 21 08:43:33 np0005590528 lvm[72873]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:43:33 np0005590528 lvm[72873]: VG ceph_vg2 finished
Jan 21 08:43:33 np0005590528 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 21 08:43:33 np0005590528 python3[72951]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:43:34 np0005590528 python3[73024]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003013.5964992-36270-127594309988560/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:43:34 np0005590528 python3[73074]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:43:34 np0005590528 systemd[1]: Reloading.
Jan 21 08:43:34 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:43:34 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:43:35 np0005590528 systemd[1]: Starting Ceph OSD losetup...
Jan 21 08:43:35 np0005590528 bash[73114]: /dev/loop5: [64513]:4579797 (/var/lib/ceph-osd-2.img)
Jan 21 08:43:35 np0005590528 systemd[1]: Finished Ceph OSD losetup.
Jan 21 08:43:35 np0005590528 lvm[73115]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:43:35 np0005590528 lvm[73115]: VG ceph_vg2 finished
Jan 21 08:43:37 np0005590528 python3[73139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:43:39 np0005590528 python3[73232]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 08:43:41 np0005590528 python3[73289]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 08:43:45 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 08:43:45 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 08:43:45 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 08:43:45 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 08:43:45 np0005590528 systemd[1]: run-r6103b12748e64ce5a9ecd0bf27261141.service: Deactivated successfully.
Jan 21 08:43:45 np0005590528 python3[73409]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:43:46 np0005590528 python3[73437]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:46 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:43:47 np0005590528 python3[73477]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:43:47 np0005590528 python3[73503]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:43:48 np0005590528 python3[73581]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:43:48 np0005590528 python3[73654]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003027.9202132-36418-59116425658073/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:43:49 np0005590528 python3[73756]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:43:49 np0005590528 python3[73829]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003029.1116705-36436-161523195250225/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:43:50 np0005590528 python3[73879]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:43:50 np0005590528 python3[73907]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:43:50 np0005590528 python3[73935]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:43:51 np0005590528 python3[73961]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:43:51 np0005590528 python3[73987]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:43:51 np0005590528 systemd-logind[780]: New session 19 of user ceph-admin.
Jan 21 08:43:51 np0005590528 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 08:43:51 np0005590528 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 08:43:52 np0005590528 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 08:43:52 np0005590528 systemd[1]: Starting User Manager for UID 42477...
Jan 21 08:43:52 np0005590528 systemd[73995]: Queued start job for default target Main User Target.
Jan 21 08:43:52 np0005590528 systemd[73995]: Created slice User Application Slice.
Jan 21 08:43:52 np0005590528 systemd[73995]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 08:43:52 np0005590528 systemd[73995]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 08:43:52 np0005590528 systemd[73995]: Reached target Paths.
Jan 21 08:43:52 np0005590528 systemd[73995]: Reached target Timers.
Jan 21 08:43:52 np0005590528 systemd[73995]: Starting D-Bus User Message Bus Socket...
Jan 21 08:43:52 np0005590528 systemd[73995]: Starting Create User's Volatile Files and Directories...
Jan 21 08:43:52 np0005590528 systemd[73995]: Listening on D-Bus User Message Bus Socket.
Jan 21 08:43:52 np0005590528 systemd[73995]: Reached target Sockets.
Jan 21 08:43:52 np0005590528 systemd[73995]: Finished Create User's Volatile Files and Directories.
Jan 21 08:43:52 np0005590528 systemd[73995]: Reached target Basic System.
Jan 21 08:43:52 np0005590528 systemd[73995]: Reached target Main User Target.
Jan 21 08:43:52 np0005590528 systemd[73995]: Startup finished in 771ms.
Jan 21 08:43:52 np0005590528 systemd[1]: Started User Manager for UID 42477.
Jan 21 08:43:52 np0005590528 systemd[1]: Started Session 19 of User ceph-admin.
Jan 21 08:43:52 np0005590528 systemd[1]: session-19.scope: Deactivated successfully.
Jan 21 08:43:52 np0005590528 systemd-logind[780]: Session 19 logged out. Waiting for processes to exit.
Jan 21 08:43:52 np0005590528 systemd-logind[780]: Removed session 19.
Jan 21 08:43:53 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:43:53 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:43:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-compat957476698-lower\x2dmapped.mount: Deactivated successfully.
Jan 21 08:44:03 np0005590528 systemd[1]: Stopping User Manager for UID 42477...
Jan 21 08:44:03 np0005590528 systemd[73995]: Activating special unit Exit the Session...
Jan 21 08:44:03 np0005590528 systemd[73995]: Stopped target Main User Target.
Jan 21 08:44:03 np0005590528 systemd[73995]: Stopped target Basic System.
Jan 21 08:44:03 np0005590528 systemd[73995]: Stopped target Paths.
Jan 21 08:44:03 np0005590528 systemd[73995]: Stopped target Sockets.
Jan 21 08:44:03 np0005590528 systemd[73995]: Stopped target Timers.
Jan 21 08:44:03 np0005590528 systemd[73995]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 21 08:44:03 np0005590528 systemd[73995]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 08:44:03 np0005590528 systemd[73995]: Closed D-Bus User Message Bus Socket.
Jan 21 08:44:03 np0005590528 systemd[73995]: Stopped Create User's Volatile Files and Directories.
Jan 21 08:44:03 np0005590528 systemd[73995]: Removed slice User Application Slice.
Jan 21 08:44:03 np0005590528 systemd[73995]: Reached target Shutdown.
Jan 21 08:44:03 np0005590528 systemd[73995]: Finished Exit the Session.
Jan 21 08:44:03 np0005590528 systemd[73995]: Reached target Exit the Session.
Jan 21 08:44:03 np0005590528 systemd[1]: user@42477.service: Deactivated successfully.
Jan 21 08:44:03 np0005590528 systemd[1]: Stopped User Manager for UID 42477.
Jan 21 08:44:03 np0005590528 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 21 08:44:03 np0005590528 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 21 08:44:03 np0005590528 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 21 08:44:03 np0005590528 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 21 08:44:03 np0005590528 systemd[1]: Removed slice User Slice of UID 42477.
Jan 21 08:44:11 np0005590528 podman[74090]: 2026-01-21 13:44:11.71765793 +0000 UTC m=+18.513283389 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:11 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:11 np0005590528 podman[74155]: 2026-01-21 13:44:11.823357747 +0000 UTC m=+0.068269173 container create 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:11 np0005590528 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 21 08:44:11 np0005590528 systemd[1]: Started libpod-conmon-7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687.scope.
Jan 21 08:44:11 np0005590528 podman[74155]: 2026-01-21 13:44:11.793841 +0000 UTC m=+0.038752406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:11 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:11 np0005590528 podman[74155]: 2026-01-21 13:44:11.976087787 +0000 UTC m=+0.220999283 container init 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:11 np0005590528 podman[74155]: 2026-01-21 13:44:11.988650147 +0000 UTC m=+0.233561523 container start 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 08:44:11 np0005590528 podman[74155]: 2026-01-21 13:44:11.994270472 +0000 UTC m=+0.239181878 container attach 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 08:44:12 np0005590528 funny_torvalds[74171]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 21 08:44:12 np0005590528 podman[74155]: 2026-01-21 13:44:12.10506107 +0000 UTC m=+0.349972516 container died 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:12 np0005590528 systemd[1]: libpod-7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687.scope: Deactivated successfully.
Jan 21 08:44:12 np0005590528 systemd[1]: var-lib-containers-storage-overlay-34996be08746d30eaf0a2bdb9ba5253a5c5faff7e7f74469be4c4ac0e7708a17-merged.mount: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74155]: 2026-01-21 13:44:12.163381413 +0000 UTC m=+0.408292779 container remove 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:12 np0005590528 systemd[1]: libpod-conmon-7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687.scope: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74189]: 2026-01-21 13:44:12.243105918 +0000 UTC m=+0.056306936 container create 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:12 np0005590528 systemd[1]: Started libpod-conmon-1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302.scope.
Jan 21 08:44:12 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:12 np0005590528 podman[74189]: 2026-01-21 13:44:12.213712036 +0000 UTC m=+0.026913134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:12 np0005590528 podman[74189]: 2026-01-21 13:44:12.321263477 +0000 UTC m=+0.134464565 container init 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:12 np0005590528 podman[74189]: 2026-01-21 13:44:12.333206163 +0000 UTC m=+0.146407191 container start 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:12 np0005590528 interesting_lovelace[74205]: 167 167
Jan 21 08:44:12 np0005590528 systemd[1]: libpod-1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302.scope: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74189]: 2026-01-21 13:44:12.339899602 +0000 UTC m=+0.153100720 container attach 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 08:44:12 np0005590528 podman[74189]: 2026-01-21 13:44:12.340486756 +0000 UTC m=+0.153687814 container died 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 08:44:12 np0005590528 podman[74189]: 2026-01-21 13:44:12.393852242 +0000 UTC m=+0.207053260 container remove 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:12 np0005590528 systemd[1]: libpod-conmon-1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302.scope: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74224]: 2026-01-21 13:44:12.468657609 +0000 UTC m=+0.050762034 container create 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:12 np0005590528 systemd[1]: Started libpod-conmon-849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e.scope.
Jan 21 08:44:12 np0005590528 podman[74224]: 2026-01-21 13:44:12.443890118 +0000 UTC m=+0.025994533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:12 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:12 np0005590528 podman[74224]: 2026-01-21 13:44:12.562813591 +0000 UTC m=+0.144918016 container init 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:44:12 np0005590528 podman[74224]: 2026-01-21 13:44:12.570077254 +0000 UTC m=+0.152181639 container start 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 08:44:12 np0005590528 podman[74224]: 2026-01-21 13:44:12.573947457 +0000 UTC m=+0.156051882 container attach 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:44:12 np0005590528 elegant_bhaskara[74241]: AQAs2HBpCipMJBAAyTFuLCl0vCqAzsT2sgexTg==
Jan 21 08:44:12 np0005590528 systemd[1]: libpod-849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e.scope: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74224]: 2026-01-21 13:44:12.615743285 +0000 UTC m=+0.197847730 container died 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 08:44:12 np0005590528 podman[74224]: 2026-01-21 13:44:12.663204459 +0000 UTC m=+0.245308844 container remove 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:12 np0005590528 systemd[1]: libpod-conmon-849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e.scope: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74260]: 2026-01-21 13:44:12.744826101 +0000 UTC m=+0.051352569 container create 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 08:44:12 np0005590528 systemd[1]: Started libpod-conmon-602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39.scope.
Jan 21 08:44:12 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:12 np0005590528 podman[74260]: 2026-01-21 13:44:12.803970474 +0000 UTC m=+0.110496962 container init 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:12 np0005590528 podman[74260]: 2026-01-21 13:44:12.811166896 +0000 UTC m=+0.117693354 container start 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 08:44:12 np0005590528 podman[74260]: 2026-01-21 13:44:12.815487129 +0000 UTC m=+0.122013587 container attach 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:44:12 np0005590528 podman[74260]: 2026-01-21 13:44:12.725180971 +0000 UTC m=+0.031707429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:12 np0005590528 hopeful_matsumoto[74276]: AQAs2HBpQOhsMRAAPu2bYRFMkONDImVzcP6vng==
Jan 21 08:44:12 np0005590528 systemd[1]: libpod-602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39.scope: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74260]: 2026-01-21 13:44:12.832950487 +0000 UTC m=+0.139476955 container died 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:12 np0005590528 systemd[1]: var-lib-containers-storage-overlay-a99926ebcedfe6c32fc300cd18298f898f559e0e8446d1e9840653e757e743cd-merged.mount: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74260]: 2026-01-21 13:44:12.873000674 +0000 UTC m=+0.179527172 container remove 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 08:44:12 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:12 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:12 np0005590528 systemd[1]: libpod-conmon-602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39.scope: Deactivated successfully.
Jan 21 08:44:12 np0005590528 podman[74293]: 2026-01-21 13:44:12.932599369 +0000 UTC m=+0.041584505 container create 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:44:12 np0005590528 systemd[1]: Started libpod-conmon-1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2.scope.
Jan 21 08:44:12 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:13 np0005590528 podman[74293]: 2026-01-21 13:44:12.911417933 +0000 UTC m=+0.020403039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:13 np0005590528 podman[74293]: 2026-01-21 13:44:13.290500322 +0000 UTC m=+0.399485428 container init 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 08:44:13 np0005590528 podman[74293]: 2026-01-21 13:44:13.297944531 +0000 UTC m=+0.406929647 container start 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 08:44:13 np0005590528 zen_hypatia[74309]: AQAt2HBpuSE7ExAAoducgVoxob61DucUKgYZ3A==
Jan 21 08:44:13 np0005590528 systemd[1]: libpod-1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2.scope: Deactivated successfully.
Jan 21 08:44:13 np0005590528 podman[74293]: 2026-01-21 13:44:13.681823816 +0000 UTC m=+0.790808952 container attach 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:44:13 np0005590528 podman[74293]: 2026-01-21 13:44:13.682673666 +0000 UTC m=+0.791658812 container died 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:16 np0005590528 systemd[1]: var-lib-containers-storage-overlay-980ba18bb5e689a5268a8fabe1a42bf36aedd8bea917199228d16a615af09c7a-merged.mount: Deactivated successfully.
Jan 21 08:44:16 np0005590528 podman[74293]: 2026-01-21 13:44:16.485217241 +0000 UTC m=+3.594202337 container remove 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:16 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:16 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:16 np0005590528 systemd[1]: libpod-conmon-1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2.scope: Deactivated successfully.
Jan 21 08:44:16 np0005590528 podman[74328]: 2026-01-21 13:44:16.550897881 +0000 UTC m=+0.043984252 container create 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:16 np0005590528 systemd[1]: Started libpod-conmon-6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f.scope.
Jan 21 08:44:16 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d123dd3da2c8bfd14870549295b527d8b4af8a125814b67da258787a7d2d6a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:16 np0005590528 podman[74328]: 2026-01-21 13:44:16.624742326 +0000 UTC m=+0.117828677 container init 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:16 np0005590528 podman[74328]: 2026-01-21 13:44:16.529382957 +0000 UTC m=+0.022469308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:16 np0005590528 podman[74328]: 2026-01-21 13:44:16.629475969 +0000 UTC m=+0.122562340 container start 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 21 08:44:16 np0005590528 podman[74328]: 2026-01-21 13:44:16.634984911 +0000 UTC m=+0.128071292 container attach 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 08:44:16 np0005590528 cool_shaw[74344]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 21 08:44:16 np0005590528 cool_shaw[74344]: setting min_mon_release = tentacle
Jan 21 08:44:16 np0005590528 cool_shaw[74344]: /usr/bin/monmaptool: set fsid to 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:16 np0005590528 cool_shaw[74344]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 21 08:44:16 np0005590528 systemd[1]: libpod-6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f.scope: Deactivated successfully.
Jan 21 08:44:16 np0005590528 podman[74328]: 2026-01-21 13:44:16.669781053 +0000 UTC m=+0.162867444 container died 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:16 np0005590528 podman[74328]: 2026-01-21 13:44:16.709956713 +0000 UTC m=+0.203043054 container remove 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:16 np0005590528 systemd[1]: libpod-conmon-6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f.scope: Deactivated successfully.
Jan 21 08:44:16 np0005590528 podman[74362]: 2026-01-21 13:44:16.775207242 +0000 UTC m=+0.047073505 container create 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:44:16 np0005590528 systemd[1]: Started libpod-conmon-7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6.scope.
Jan 21 08:44:16 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236b6e161ec668905c34368baa586458c05abc49385adaf485a741f17d4b2b2/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236b6e161ec668905c34368baa586458c05abc49385adaf485a741f17d4b2b2/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236b6e161ec668905c34368baa586458c05abc49385adaf485a741f17d4b2b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:16 np0005590528 podman[74362]: 2026-01-21 13:44:16.74960157 +0000 UTC m=+0.021467923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236b6e161ec668905c34368baa586458c05abc49385adaf485a741f17d4b2b2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:16 np0005590528 podman[74362]: 2026-01-21 13:44:16.861151377 +0000 UTC m=+0.133017690 container init 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:16 np0005590528 podman[74362]: 2026-01-21 13:44:16.875254694 +0000 UTC m=+0.147120957 container start 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 08:44:16 np0005590528 podman[74362]: 2026-01-21 13:44:16.879213978 +0000 UTC m=+0.151080281 container attach 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 08:44:16 np0005590528 systemd[1]: libpod-7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6.scope: Deactivated successfully.
Jan 21 08:44:16 np0005590528 podman[74362]: 2026-01-21 13:44:16.979992777 +0000 UTC m=+0.251859050 container died 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 08:44:17 np0005590528 podman[74362]: 2026-01-21 13:44:17.012477384 +0000 UTC m=+0.284343647 container remove 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:17 np0005590528 systemd[1]: libpod-conmon-7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6.scope: Deactivated successfully.
Jan 21 08:44:17 np0005590528 systemd[1]: Reloading.
Jan 21 08:44:17 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:44:17 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:44:17 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:17 np0005590528 systemd[1]: Reloading.
Jan 21 08:44:17 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:44:17 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:44:17 np0005590528 systemd[1]: Reached target All Ceph clusters and services.
Jan 21 08:44:17 np0005590528 systemd[1]: Reloading.
Jan 21 08:44:17 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:44:17 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:44:17 np0005590528 systemd[1]: Reached target Ceph cluster 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:44:17 np0005590528 systemd[1]: Reloading.
Jan 21 08:44:17 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:44:17 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:44:18 np0005590528 systemd[1]: Reloading.
Jan 21 08:44:18 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:44:18 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:44:18 np0005590528 systemd[1]: Created slice Slice /system/ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:44:18 np0005590528 systemd[1]: Reached target System Time Set.
Jan 21 08:44:18 np0005590528 systemd[1]: Reached target System Time Synchronized.
Jan 21 08:44:18 np0005590528 systemd[1]: Starting Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:44:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:18 np0005590528 podman[74656]: 2026-01-21 13:44:18.679920828 +0000 UTC m=+0.050850507 container create c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 08:44:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:18 np0005590528 podman[74656]: 2026-01-21 13:44:18.750801942 +0000 UTC m=+0.121731661 container init c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:44:18 np0005590528 podman[74656]: 2026-01-21 13:44:18.656319814 +0000 UTC m=+0.027249563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:18 np0005590528 podman[74656]: 2026-01-21 13:44:18.765196156 +0000 UTC m=+0.136125845 container start c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:18 np0005590528 bash[74656]: c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6
Jan 21 08:44:18 np0005590528 systemd[1]: Started Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: pidfile_write: ignore empty --pid-file
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: load: jerasure load: lrc 
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: RocksDB version: 7.9.2
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Git sha 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: DB SUMMARY
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: DB Session ID:  4UCG4RZ2N4ZX2X46OZSC
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: CURRENT file:  CURRENT
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                         Options.error_if_exists: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                       Options.create_if_missing: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                                     Options.env: 0x556ad76b5440
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                                Options.info_log: 0x556ad9d173e0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                              Options.statistics: (nil)
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                               Options.use_fsync: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                              Options.db_log_dir: 
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                                 Options.wal_dir: 
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                    Options.write_buffer_manager: 0x556ad9c96140
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.unordered_write: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                               Options.row_cache: None
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                              Options.wal_filter: None
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.two_write_queues: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.wal_compression: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.atomic_flush: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.max_background_jobs: 2
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.max_background_compactions: -1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.max_subcompactions: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.max_total_wal_size: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                          Options.max_open_files: -1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:       Options.compaction_readahead_size: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Compression algorithms supported:
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: 	kZSTD supported: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: 	kXpressCompression supported: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: 	kBZip2Compression supported: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: 	kLZ4Compression supported: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: 	kZlibCompression supported: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: 	kSnappyCompression supported: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:           Options.merge_operator: 
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:        Options.compaction_filter: None
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556ad9ca2600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556ad9c878d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:        Options.write_buffer_size: 33554432
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:  Options.max_write_buffer_number: 2
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:          Options.compression: NoCompression
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.num_levels: 7
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0890460c-1efa-4b98-b37d-c7b2c3489544
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003058818385, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003058820524, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "4UCG4RZ2N4ZX2X46OZSC", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003058820648, "job": 1, "event": "recovery_finished"}
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556ad9cb4e00
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: DB pointer 0x556ad9e00000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556ad9c878d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@-1(???) e0 preinit fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 21 08:44:18 np0005590528 podman[74676]: 2026-01-21 13:44:18.856192311 +0000 UTC m=+0.051396269 container create cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T13:44:16.665097+0000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : created 2026-01-21T13:44:16.665097+0000
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-21T13:44:16.926757Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).mds e1 new map
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-21T13:44:18:859596+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mkfs 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 21 08:44:18 np0005590528 ceph-mon[74675]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 08:44:18 np0005590528 systemd[1]: Started libpod-conmon-cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99.scope.
Jan 21 08:44:18 np0005590528 podman[74676]: 2026-01-21 13:44:18.831704056 +0000 UTC m=+0.026908034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:18 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e951e640ba0057ec35c48a4230c07ff41a1bb6e7af08d448dcd8b309f7f21b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e951e640ba0057ec35c48a4230c07ff41a1bb6e7af08d448dcd8b309f7f21b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e951e640ba0057ec35c48a4230c07ff41a1bb6e7af08d448dcd8b309f7f21b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:18 np0005590528 podman[74676]: 2026-01-21 13:44:18.95488343 +0000 UTC m=+0.150087378 container init cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:44:18 np0005590528 podman[74676]: 2026-01-21 13:44:18.967647715 +0000 UTC m=+0.162851703 container start cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 08:44:18 np0005590528 podman[74676]: 2026-01-21 13:44:18.972461451 +0000 UTC m=+0.167665429 container attach cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:44:19 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 21 08:44:19 np0005590528 ceph-mon[74675]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432883261' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:  cluster:
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    id:     2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    health: HEALTH_OK
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]: 
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:  services:
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    mon: 1 daemons, quorum compute-0 (age 0.320852s) [leader: compute-0]
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    mgr: no daemons active
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    osd: 0 osds: 0 up, 0 in
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]: 
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:  data:
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    pools:   0 pools, 0 pgs
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    objects: 0 objects, 0 B
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    usage:   0 B used, 0 B / 0 B avail
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]:    pgs:     
Jan 21 08:44:19 np0005590528 admiring_blackwell[74728]: 
Jan 21 08:44:19 np0005590528 systemd[1]: libpod-cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99.scope: Deactivated successfully.
Jan 21 08:44:19 np0005590528 conmon[74728]: conmon cd18b6332c4a5c2a6982 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99.scope/container/memory.events
Jan 21 08:44:19 np0005590528 podman[74676]: 2026-01-21 13:44:19.195491121 +0000 UTC m=+0.390695069 container died cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 08:44:19 np0005590528 podman[74676]: 2026-01-21 13:44:19.241742077 +0000 UTC m=+0.436946065 container remove cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:19 np0005590528 systemd[1]: libpod-conmon-cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99.scope: Deactivated successfully.
Jan 21 08:44:19 np0005590528 podman[74770]: 2026-01-21 13:44:19.315824357 +0000 UTC m=+0.048311976 container create b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 08:44:19 np0005590528 systemd[1]: Started libpod-conmon-b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a.scope.
Jan 21 08:44:19 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:19 np0005590528 podman[74770]: 2026-01-21 13:44:19.383981236 +0000 UTC m=+0.116468875 container init b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:44:19 np0005590528 podman[74770]: 2026-01-21 13:44:19.294337593 +0000 UTC m=+0.026825252 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:19 np0005590528 podman[74770]: 2026-01-21 13:44:19.393302869 +0000 UTC m=+0.125790488 container start b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:19 np0005590528 podman[74770]: 2026-01-21 13:44:19.39708476 +0000 UTC m=+0.129572379 container attach b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:19 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 08:44:19 np0005590528 ceph-mon[74675]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1859974711' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 08:44:19 np0005590528 ceph-mon[74675]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1859974711' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 08:44:19 np0005590528 friendly_maxwell[74787]: 
Jan 21 08:44:19 np0005590528 friendly_maxwell[74787]: [global]
Jan 21 08:44:19 np0005590528 friendly_maxwell[74787]: #011fsid = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:19 np0005590528 friendly_maxwell[74787]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 21 08:44:19 np0005590528 friendly_maxwell[74787]: #011osd_crush_chooseleaf_type = 0
Jan 21 08:44:19 np0005590528 systemd[1]: libpod-b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a.scope: Deactivated successfully.
Jan 21 08:44:19 np0005590528 podman[74770]: 2026-01-21 13:44:19.610785977 +0000 UTC m=+0.343273636 container died b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:44:19 np0005590528 systemd[1]: var-lib-containers-storage-overlay-95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90-merged.mount: Deactivated successfully.
Jan 21 08:44:19 np0005590528 podman[74770]: 2026-01-21 13:44:19.656972691 +0000 UTC m=+0.389460310 container remove b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:44:19 np0005590528 systemd[1]: libpod-conmon-b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a.scope: Deactivated successfully.
Jan 21 08:44:19 np0005590528 podman[74825]: 2026-01-21 13:44:19.734165457 +0000 UTC m=+0.048275486 container create fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:19 np0005590528 systemd[1]: Started libpod-conmon-fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd.scope.
Jan 21 08:44:19 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:19 np0005590528 podman[74825]: 2026-01-21 13:44:19.718464101 +0000 UTC m=+0.032574160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:19 np0005590528 podman[74825]: 2026-01-21 13:44:19.815997812 +0000 UTC m=+0.130107951 container init fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:19 np0005590528 podman[74825]: 2026-01-21 13:44:19.823195884 +0000 UTC m=+0.137305943 container start fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:19 np0005590528 podman[74825]: 2026-01-21 13:44:19.827601819 +0000 UTC m=+0.141711948 container attach fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:19 np0005590528 ceph-mon[74675]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 08:44:19 np0005590528 ceph-mon[74675]: from='client.? 192.168.122.100:0/1859974711' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 08:44:19 np0005590528 ceph-mon[74675]: from='client.? 192.168.122.100:0/1859974711' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 08:44:20 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:44:20 np0005590528 ceph-mon[74675]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589707557' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:44:20 np0005590528 systemd[1]: libpod-fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd.scope: Deactivated successfully.
Jan 21 08:44:20 np0005590528 podman[74825]: 2026-01-21 13:44:20.094851497 +0000 UTC m=+0.408961596 container died fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 08:44:20 np0005590528 systemd[1]: var-lib-containers-storage-overlay-11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e-merged.mount: Deactivated successfully.
Jan 21 08:44:20 np0005590528 podman[74825]: 2026-01-21 13:44:20.145264372 +0000 UTC m=+0.459374411 container remove fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 08:44:20 np0005590528 systemd[1]: libpod-conmon-fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd.scope: Deactivated successfully.
Jan 21 08:44:20 np0005590528 systemd[1]: Stopping Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:44:20 np0005590528 ceph-mon[74675]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 21 08:44:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0[74671]: 2026-01-21T13:44:20.408+0000 7fd1922c0640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 21 08:44:20 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 21 08:44:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0[74671]: 2026-01-21T13:44:20.408+0000 7fd1922c0640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 21 08:44:20 np0005590528 ceph-mon[74675]: mon.compute-0@0(leader) e1 shutdown
Jan 21 08:44:20 np0005590528 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 08:44:20 np0005590528 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 08:44:20 np0005590528 podman[74911]: 2026-01-21 13:44:20.470349062 +0000 UTC m=+0.118694917 container died c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:44:20 np0005590528 systemd[1]: var-lib-containers-storage-overlay-4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145-merged.mount: Deactivated successfully.
Jan 21 08:44:20 np0005590528 podman[74911]: 2026-01-21 13:44:20.521339101 +0000 UTC m=+0.169684956 container remove c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:20 np0005590528 bash[74911]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0
Jan 21 08:44:20 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:20 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:20 np0005590528 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 08:44:20 np0005590528 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mon.compute-0.service: Deactivated successfully.
Jan 21 08:44:20 np0005590528 systemd[1]: Stopped Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:44:20 np0005590528 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mon.compute-0.service: Consumed 1.066s CPU time.
Jan 21 08:44:20 np0005590528 systemd[1]: Starting Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:44:20 np0005590528 podman[75012]: 2026-01-21 13:44:20.906480287 +0000 UTC m=+0.035329936 container create cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529ae71ef349095f305fe3a4b591c8edb0eda6d58e36470657c076e838a2af68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529ae71ef349095f305fe3a4b591c8edb0eda6d58e36470657c076e838a2af68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529ae71ef349095f305fe3a4b591c8edb0eda6d58e36470657c076e838a2af68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529ae71ef349095f305fe3a4b591c8edb0eda6d58e36470657c076e838a2af68/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:20 np0005590528 podman[75012]: 2026-01-21 13:44:20.970577169 +0000 UTC m=+0.099426838 container init cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 08:44:20 np0005590528 podman[75012]: 2026-01-21 13:44:20.98110775 +0000 UTC m=+0.109957399 container start cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:20 np0005590528 bash[75012]: cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649
Jan 21 08:44:20 np0005590528 podman[75012]: 2026-01-21 13:44:20.890496484 +0000 UTC m=+0.019346153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:20 np0005590528 systemd[1]: Started Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: pidfile_write: ignore empty --pid-file
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: load: jerasure load: lrc 
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: RocksDB version: 7.9.2
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Git sha 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: DB SUMMARY
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: DB Session ID:  MNCZ0UYV5GPEBH7LDUF1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: CURRENT file:  CURRENT
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                         Options.error_if_exists: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                       Options.create_if_missing: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                                     Options.env: 0x56223e97f440
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                                Options.info_log: 0x562240bb9e80
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                              Options.statistics: (nil)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                               Options.use_fsync: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                              Options.db_log_dir: 
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                                 Options.wal_dir: 
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                    Options.write_buffer_manager: 0x562240c04140
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.unordered_write: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                               Options.row_cache: None
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                              Options.wal_filter: None
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.two_write_queues: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.wal_compression: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.atomic_flush: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.max_background_jobs: 2
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.max_background_compactions: -1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.max_subcompactions: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.max_total_wal_size: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                          Options.max_open_files: -1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:       Options.compaction_readahead_size: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Compression algorithms supported:
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: #011kZSTD supported: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: #011kXpressCompression supported: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: #011kBZip2Compression supported: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: #011kLZ4Compression supported: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: #011kZlibCompression supported: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: #011kSnappyCompression supported: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:           Options.merge_operator: 
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:        Options.compaction_filter: None
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562240c10a00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562240bf58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:        Options.write_buffer_size: 33554432
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:  Options.max_write_buffer_number: 2
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:          Options.compression: NoCompression
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.num_levels: 7
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0890460c-1efa-4b98-b37d-c7b2c3489544
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003061030345, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003061035911, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003061, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003061036143, "job": 1, "event": "recovery_finished"}
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562240c22e00
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: DB pointer 0x562240d6c000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.70 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.70 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(???) e1 preinit fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(???).mds e1 new map
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-21T13:44:18:859596+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T13:44:16.665097+0000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : created 2026-01-21T13:44:16.665097+0000
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 21 08:44:21 np0005590528 podman[75032]: 2026-01-21 13:44:21.061665576 +0000 UTC m=+0.047154018 container create 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 21 08:44:21 np0005590528 systemd[1]: Started libpod-conmon-7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c.scope.
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 08:44:21 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:21 np0005590528 podman[75032]: 2026-01-21 13:44:21.040634293 +0000 UTC m=+0.026122745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a76f679d587b444444f0c37b1b300000be6e65ea98d1b1d98a4544b0ed0479/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a76f679d587b444444f0c37b1b300000be6e65ea98d1b1d98a4544b0ed0479/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a76f679d587b444444f0c37b1b300000be6e65ea98d1b1d98a4544b0ed0479/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:21 np0005590528 podman[75032]: 2026-01-21 13:44:21.161725727 +0000 UTC m=+0.147214159 container init 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:21 np0005590528 podman[75032]: 2026-01-21 13:44:21.171981482 +0000 UTC m=+0.157469914 container start 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:21 np0005590528 podman[75032]: 2026-01-21 13:44:21.175298762 +0000 UTC m=+0.160787194 container attach 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 21 08:44:21 np0005590528 systemd[1]: libpod-7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c.scope: Deactivated successfully.
Jan 21 08:44:21 np0005590528 podman[75032]: 2026-01-21 13:44:21.378808176 +0000 UTC m=+0.364296648 container died 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:21 np0005590528 podman[75032]: 2026-01-21 13:44:21.424948169 +0000 UTC m=+0.410436611 container remove 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:21 np0005590528 systemd[1]: libpod-conmon-7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c.scope: Deactivated successfully.
Jan 21 08:44:21 np0005590528 podman[75125]: 2026-01-21 13:44:21.497262637 +0000 UTC m=+0.045759975 container create 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 08:44:21 np0005590528 systemd[1]: Started libpod-conmon-3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7.scope.
Jan 21 08:44:21 np0005590528 podman[75125]: 2026-01-21 13:44:21.478488678 +0000 UTC m=+0.026986016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:21 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f0e85db03366460f6c7dcebaf91587124a66e48f9b8fde2f5f7ce1530fae1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f0e85db03366460f6c7dcebaf91587124a66e48f9b8fde2f5f7ce1530fae1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f0e85db03366460f6c7dcebaf91587124a66e48f9b8fde2f5f7ce1530fae1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:21 np0005590528 podman[75125]: 2026-01-21 13:44:21.601918909 +0000 UTC m=+0.150416277 container init 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 08:44:21 np0005590528 podman[75125]: 2026-01-21 13:44:21.6078184 +0000 UTC m=+0.156315728 container start 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:21 np0005590528 podman[75125]: 2026-01-21 13:44:21.61160251 +0000 UTC m=+0.160099878 container attach 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:44:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 21 08:44:21 np0005590528 systemd[1]: libpod-3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7.scope: Deactivated successfully.
Jan 21 08:44:21 np0005590528 conmon[75142]: conmon 3c5a08e0ec69921da58a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7.scope/container/memory.events
Jan 21 08:44:21 np0005590528 podman[75125]: 2026-01-21 13:44:21.839311253 +0000 UTC m=+0.387808571 container died 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:44:21 np0005590528 systemd[1]: var-lib-containers-storage-overlay-99f0e85db03366460f6c7dcebaf91587124a66e48f9b8fde2f5f7ce1530fae1f-merged.mount: Deactivated successfully.
Jan 21 08:44:21 np0005590528 podman[75125]: 2026-01-21 13:44:21.884084093 +0000 UTC m=+0.432581411 container remove 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 08:44:21 np0005590528 systemd[1]: libpod-conmon-3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7.scope: Deactivated successfully.
Jan 21 08:44:21 np0005590528 systemd[1]: Reloading.
Jan 21 08:44:22 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:44:22 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:44:22 np0005590528 systemd[1]: Reloading.
Jan 21 08:44:22 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:44:22 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:44:22 np0005590528 systemd[1]: Starting Ceph mgr.compute-0.tnwklj for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:44:22 np0005590528 podman[75303]: 2026-01-21 13:44:22.715375862 +0000 UTC m=+0.043358507 container create e43620387faca5e1843acf5892e98f1ab1b495216bbbf44f0fb6cf55c32acc3c (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da37902d286d7277952d091c9412e8cde529db529ba1b89f9112efa722ff9172/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da37902d286d7277952d091c9412e8cde529db529ba1b89f9112efa722ff9172/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da37902d286d7277952d091c9412e8cde529db529ba1b89f9112efa722ff9172/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da37902d286d7277952d091c9412e8cde529db529ba1b89f9112efa722ff9172/merged/var/lib/ceph/mgr/ceph-compute-0.tnwklj supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:22 np0005590528 podman[75303]: 2026-01-21 13:44:22.782492536 +0000 UTC m=+0.110475251 container init e43620387faca5e1843acf5892e98f1ab1b495216bbbf44f0fb6cf55c32acc3c (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:22 np0005590528 podman[75303]: 2026-01-21 13:44:22.693397596 +0000 UTC m=+0.021380271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:22 np0005590528 podman[75303]: 2026-01-21 13:44:22.79185991 +0000 UTC m=+0.119842555 container start e43620387faca5e1843acf5892e98f1ab1b495216bbbf44f0fb6cf55c32acc3c (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 08:44:22 np0005590528 bash[75303]: e43620387faca5e1843acf5892e98f1ab1b495216bbbf44f0fb6cf55c32acc3c
Jan 21 08:44:22 np0005590528 systemd[1]: Started Ceph mgr.compute-0.tnwklj for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:44:22 np0005590528 ceph-mgr[75322]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:44:22 np0005590528 ceph-mgr[75322]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 21 08:44:22 np0005590528 ceph-mgr[75322]: pidfile_write: ignore empty --pid-file
Jan 21 08:44:22 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'alerts'
Jan 21 08:44:22 np0005590528 podman[75323]: 2026-01-21 13:44:22.884588807 +0000 UTC m=+0.048528612 container create 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:44:22 np0005590528 systemd[1]: Started libpod-conmon-24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c.scope.
Jan 21 08:44:22 np0005590528 podman[75323]: 2026-01-21 13:44:22.863898362 +0000 UTC m=+0.027838167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:22 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'balancer'
Jan 21 08:44:22 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a60328aec2aafdcc0a6fd3cad5d5e8dbf8d33b05360fd667a6fb3f1458c0c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a60328aec2aafdcc0a6fd3cad5d5e8dbf8d33b05360fd667a6fb3f1458c0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a60328aec2aafdcc0a6fd3cad5d5e8dbf8d33b05360fd667a6fb3f1458c0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:22 np0005590528 podman[75323]: 2026-01-21 13:44:22.987119038 +0000 UTC m=+0.151058843 container init 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 08:44:22 np0005590528 podman[75323]: 2026-01-21 13:44:22.994296489 +0000 UTC m=+0.158236264 container start 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:22 np0005590528 podman[75323]: 2026-01-21 13:44:22.998677743 +0000 UTC m=+0.162617548 container attach 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 21 08:44:23 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'cephadm'
Jan 21 08:44:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 08:44:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3090944822' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]: 
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]: {
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "health": {
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "status": "HEALTH_OK",
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "checks": {},
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "mutes": []
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    },
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "election_epoch": 5,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "quorum": [
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        0
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    ],
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "quorum_names": [
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "compute-0"
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    ],
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "quorum_age": 2,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "monmap": {
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "epoch": 1,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "min_mon_release_name": "tentacle",
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_mons": 1
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    },
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "osdmap": {
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "epoch": 1,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_osds": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_up_osds": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "osd_up_since": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_in_osds": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "osd_in_since": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_remapped_pgs": 0
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    },
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "pgmap": {
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "pgs_by_state": [],
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_pgs": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_pools": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_objects": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "data_bytes": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "bytes_used": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "bytes_avail": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "bytes_total": 0
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    },
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "fsmap": {
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "epoch": 1,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "btime": "2026-01-21T13:44:18:859596+0000",
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "by_rank": [],
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "up:standby": 0
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    },
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "mgrmap": {
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "available": false,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "num_standbys": 0,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "modules": [
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:            "iostat",
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:            "nfs"
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        ],
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "services": {}
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    },
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "servicemap": {
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "epoch": 1,
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "modified": "2026-01-21T13:44:18.861719+0000",
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:        "services": {}
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    },
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]:    "progress_events": {}
Jan 21 08:44:23 np0005590528 dreamy_solomon[75360]: }
Jan 21 08:44:23 np0005590528 systemd[1]: libpod-24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c.scope: Deactivated successfully.
Jan 21 08:44:23 np0005590528 podman[75323]: 2026-01-21 13:44:23.226937369 +0000 UTC m=+0.390877144 container died 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:23 np0005590528 systemd[1]: var-lib-containers-storage-overlay-704a60328aec2aafdcc0a6fd3cad5d5e8dbf8d33b05360fd667a6fb3f1458c0c-merged.mount: Deactivated successfully.
Jan 21 08:44:23 np0005590528 podman[75323]: 2026-01-21 13:44:23.271119886 +0000 UTC m=+0.435059661 container remove 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 08:44:23 np0005590528 systemd[1]: libpod-conmon-24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c.scope: Deactivated successfully.
Jan 21 08:44:23 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'crash'
Jan 21 08:44:23 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'dashboard'
Jan 21 08:44:24 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'devicehealth'
Jan 21 08:44:24 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 08:44:24 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 08:44:24 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 08:44:24 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]:  from numpy import show_config as show_numpy_config
Jan 21 08:44:24 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'influx'
Jan 21 08:44:24 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'insights'
Jan 21 08:44:24 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'iostat'
Jan 21 08:44:25 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'k8sevents'
Jan 21 08:44:25 np0005590528 podman[75410]: 2026-01-21 13:44:25.34940161 +0000 UTC m=+0.051406030 container create 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:25 np0005590528 systemd[1]: Started libpod-conmon-6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220.scope.
Jan 21 08:44:25 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'localpool'
Jan 21 08:44:25 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cad89fb347d284ebf9f37716b3907d0c0465e2c39d4f384be10a0773bbcfa3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cad89fb347d284ebf9f37716b3907d0c0465e2c39d4f384be10a0773bbcfa3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cad89fb347d284ebf9f37716b3907d0c0465e2c39d4f384be10a0773bbcfa3c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:25 np0005590528 podman[75410]: 2026-01-21 13:44:25.326259256 +0000 UTC m=+0.028263786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:25 np0005590528 podman[75410]: 2026-01-21 13:44:25.436867469 +0000 UTC m=+0.138871919 container init 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:25 np0005590528 podman[75410]: 2026-01-21 13:44:25.444469742 +0000 UTC m=+0.146474192 container start 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:44:25 np0005590528 podman[75410]: 2026-01-21 13:44:25.44901877 +0000 UTC m=+0.151023200 container attach 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:25 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 08:44:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 08:44:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033193989' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]: 
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]: {
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "health": {
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "status": "HEALTH_OK",
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "checks": {},
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "mutes": []
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    },
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "election_epoch": 5,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "quorum": [
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        0
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    ],
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "quorum_names": [
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "compute-0"
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    ],
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "quorum_age": 4,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "monmap": {
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "epoch": 1,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "min_mon_release_name": "tentacle",
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_mons": 1
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    },
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "osdmap": {
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "epoch": 1,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_osds": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_up_osds": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "osd_up_since": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_in_osds": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "osd_in_since": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_remapped_pgs": 0
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    },
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "pgmap": {
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "pgs_by_state": [],
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_pgs": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_pools": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_objects": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "data_bytes": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "bytes_used": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "bytes_avail": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "bytes_total": 0
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    },
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "fsmap": {
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "epoch": 1,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "btime": "2026-01-21T13:44:18:859596+0000",
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "by_rank": [],
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "up:standby": 0
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    },
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "mgrmap": {
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "available": false,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "num_standbys": 0,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "modules": [
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:            "iostat",
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:            "nfs"
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        ],
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "services": {}
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    },
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "servicemap": {
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "epoch": 1,
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "modified": "2026-01-21T13:44:18.861719+0000",
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:        "services": {}
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    },
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]:    "progress_events": {}
Jan 21 08:44:25 np0005590528 festive_chatelet[75426]: }
Jan 21 08:44:25 np0005590528 systemd[1]: libpod-6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220.scope: Deactivated successfully.
Jan 21 08:44:25 np0005590528 podman[75452]: 2026-01-21 13:44:25.667897582 +0000 UTC m=+0.023191675 container died 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:25 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1cad89fb347d284ebf9f37716b3907d0c0465e2c39d4f384be10a0773bbcfa3c-merged.mount: Deactivated successfully.
Jan 21 08:44:25 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'mirroring'
Jan 21 08:44:25 np0005590528 podman[75452]: 2026-01-21 13:44:25.704336274 +0000 UTC m=+0.059630347 container remove 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 08:44:25 np0005590528 systemd[1]: libpod-conmon-6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220.scope: Deactivated successfully.
Jan 21 08:44:25 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'nfs'
Jan 21 08:44:26 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'orchestrator'
Jan 21 08:44:26 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 08:44:26 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'osd_support'
Jan 21 08:44:26 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 08:44:26 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'progress'
Jan 21 08:44:26 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'prometheus'
Jan 21 08:44:26 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'rbd_support'
Jan 21 08:44:26 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'rgw'
Jan 21 08:44:27 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'rook'
Jan 21 08:44:27 np0005590528 podman[75467]: 2026-01-21 13:44:27.792859142 +0000 UTC m=+0.056484641 container create 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:27 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'selftest'
Jan 21 08:44:27 np0005590528 systemd[1]: Started libpod-conmon-0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c.scope.
Jan 21 08:44:27 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be97d012b72548c1e462a0611ebc46d13d6b413acb3457f335623410e454a20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be97d012b72548c1e462a0611ebc46d13d6b413acb3457f335623410e454a20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be97d012b72548c1e462a0611ebc46d13d6b413acb3457f335623410e454a20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:27 np0005590528 podman[75467]: 2026-01-21 13:44:27.76723487 +0000 UTC m=+0.030860379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:27 np0005590528 podman[75467]: 2026-01-21 13:44:27.862590559 +0000 UTC m=+0.126216058 container init 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:27 np0005590528 podman[75467]: 2026-01-21 13:44:27.868459749 +0000 UTC m=+0.132085238 container start 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 08:44:27 np0005590528 podman[75467]: 2026-01-21 13:44:27.87227733 +0000 UTC m=+0.135902819 container attach 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:27 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'smb'
Jan 21 08:44:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 08:44:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162610670' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]: 
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]: {
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "health": {
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "status": "HEALTH_OK",
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "checks": {},
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "mutes": []
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    },
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "election_epoch": 5,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "quorum": [
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        0
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    ],
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "quorum_names": [
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "compute-0"
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    ],
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "quorum_age": 6,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "monmap": {
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "epoch": 1,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "min_mon_release_name": "tentacle",
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_mons": 1
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    },
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "osdmap": {
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "epoch": 1,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_osds": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_up_osds": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "osd_up_since": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_in_osds": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "osd_in_since": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_remapped_pgs": 0
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    },
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "pgmap": {
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "pgs_by_state": [],
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_pgs": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_pools": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_objects": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "data_bytes": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "bytes_used": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "bytes_avail": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "bytes_total": 0
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    },
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "fsmap": {
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "epoch": 1,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "btime": "2026-01-21T13:44:18:859596+0000",
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "by_rank": [],
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "up:standby": 0
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    },
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "mgrmap": {
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "available": false,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "num_standbys": 0,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "modules": [
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:            "iostat",
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:            "nfs"
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        ],
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "services": {}
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    },
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "servicemap": {
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "epoch": 1,
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "modified": "2026-01-21T13:44:18.861719+0000",
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:        "services": {}
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    },
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]:    "progress_events": {}
Jan 21 08:44:28 np0005590528 vigilant_satoshi[75483]: }
Jan 21 08:44:28 np0005590528 systemd[1]: libpod-0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c.scope: Deactivated successfully.
Jan 21 08:44:28 np0005590528 podman[75467]: 2026-01-21 13:44:28.074872553 +0000 UTC m=+0.338498102 container died 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:28 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2be97d012b72548c1e462a0611ebc46d13d6b413acb3457f335623410e454a20-merged.mount: Deactivated successfully.
Jan 21 08:44:28 np0005590528 podman[75467]: 2026-01-21 13:44:28.129898218 +0000 UTC m=+0.393523717 container remove 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:28 np0005590528 systemd[1]: libpod-conmon-0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c.scope: Deactivated successfully.
Jan 21 08:44:28 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'snap_schedule'
Jan 21 08:44:28 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'stats'
Jan 21 08:44:28 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'status'
Jan 21 08:44:28 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'telegraf'
Jan 21 08:44:28 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'telemetry'
Jan 21 08:44:28 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 08:44:28 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'volumes'
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: ms_deliver_dispatch: unhandled message 0x55ec554ff860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.tnwklj
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr handle_mgr_map Activating!
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr handle_mgr_map I am now activating
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.tnwklj(active, starting, since 0.0158873s)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: balancer
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: crash
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Manager daemon compute-0.tnwklj is now available
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [balancer INFO root] Starting
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: devicehealth
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [devicehealth INFO root] Starting
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:44:29
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [balancer INFO root] No pools available
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: iostat
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: nfs
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: orchestrator
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: pg_autoscaler
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: progress
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [progress INFO root] Loading...
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [progress INFO root] No stored events to load
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [progress INFO root] Loaded [] historic events
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] recovery thread starting
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] starting setup
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: rbd_support
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: status
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: telemetry
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] PerfHandler: starting
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TaskHandler: starting
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] setup complete
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: Activating manager daemon compute-0.tnwklj
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: Manager daemon compute-0.tnwklj is now available
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} : dispatch
Jan 21 08:44:29 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: volumes
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 21 08:44:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:30 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.tnwklj(active, since 1.03423s)
Jan 21 08:44:30 np0005590528 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:30 np0005590528 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:30 np0005590528 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:30 np0005590528 podman[75598]: 2026-01-21 13:44:30.217098266 +0000 UTC m=+0.055860546 container create 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 08:44:30 np0005590528 systemd[1]: Started libpod-conmon-42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7.scope.
Jan 21 08:44:30 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81662b206783b6855cf10aca4034cbd730bbae8aa32c4bed910ea1716a0a4b48/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81662b206783b6855cf10aca4034cbd730bbae8aa32c4bed910ea1716a0a4b48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81662b206783b6855cf10aca4034cbd730bbae8aa32c4bed910ea1716a0a4b48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:30 np0005590528 podman[75598]: 2026-01-21 13:44:30.198312236 +0000 UTC m=+0.037074476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:30 np0005590528 podman[75598]: 2026-01-21 13:44:30.31264764 +0000 UTC m=+0.151409900 container init 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:30 np0005590528 podman[75598]: 2026-01-21 13:44:30.319178486 +0000 UTC m=+0.157940726 container start 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:30 np0005590528 podman[75598]: 2026-01-21 13:44:30.324077263 +0000 UTC m=+0.162839533 container attach 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 08:44:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340416511' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]: 
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]: {
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "health": {
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "status": "HEALTH_OK",
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "checks": {},
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "mutes": []
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    },
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "election_epoch": 5,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "quorum": [
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        0
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    ],
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "quorum_names": [
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "compute-0"
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    ],
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "quorum_age": 9,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "monmap": {
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "epoch": 1,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "min_mon_release_name": "tentacle",
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_mons": 1
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    },
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "osdmap": {
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "epoch": 1,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_osds": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_up_osds": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "osd_up_since": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_in_osds": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "osd_in_since": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_remapped_pgs": 0
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    },
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "pgmap": {
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "pgs_by_state": [],
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_pgs": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_pools": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_objects": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "data_bytes": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "bytes_used": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "bytes_avail": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "bytes_total": 0
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    },
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "fsmap": {
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "epoch": 1,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "btime": "2026-01-21T13:44:18:859596+0000",
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "by_rank": [],
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "up:standby": 0
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    },
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "mgrmap": {
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "available": true,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "num_standbys": 0,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "modules": [
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:            "iostat",
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:            "nfs"
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        ],
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "services": {}
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    },
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "servicemap": {
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "epoch": 1,
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "modified": "2026-01-21T13:44:18.861719+0000",
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:        "services": {}
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    },
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]:    "progress_events": {}
Jan 21 08:44:30 np0005590528 affectionate_chaum[75614]: }
Jan 21 08:44:30 np0005590528 systemd[1]: libpod-42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7.scope: Deactivated successfully.
Jan 21 08:44:30 np0005590528 conmon[75614]: conmon 42747becb9a5f7cbb6c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7.scope/container/memory.events
Jan 21 08:44:30 np0005590528 podman[75598]: 2026-01-21 13:44:30.938821186 +0000 UTC m=+0.777583466 container died 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:44:30 np0005590528 systemd[1]: var-lib-containers-storage-overlay-81662b206783b6855cf10aca4034cbd730bbae8aa32c4bed910ea1716a0a4b48-merged.mount: Deactivated successfully.
Jan 21 08:44:30 np0005590528 podman[75598]: 2026-01-21 13:44:30.985961363 +0000 UTC m=+0.824723633 container remove 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 08:44:30 np0005590528 systemd[1]: libpod-conmon-42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7.scope: Deactivated successfully.
Jan 21 08:44:31 np0005590528 podman[75652]: 2026-01-21 13:44:31.058457566 +0000 UTC m=+0.046782200 container create ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:44:31 np0005590528 systemd[1]: Started libpod-conmon-ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788.scope.
Jan 21 08:44:31 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:31 np0005590528 podman[75652]: 2026-01-21 13:44:31.036195784 +0000 UTC m=+0.024520418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:31 np0005590528 podman[75652]: 2026-01-21 13:44:31.145804293 +0000 UTC m=+0.134128927 container init ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 08:44:31 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:31 np0005590528 podman[75652]: 2026-01-21 13:44:31.155796552 +0000 UTC m=+0.144121156 container start ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 08:44:31 np0005590528 podman[75652]: 2026-01-21 13:44:31.159677895 +0000 UTC m=+0.148002499 container attach ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:31 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:31 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.tnwklj(active, since 2s)
Jan 21 08:44:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 08:44:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/926465230' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 08:44:31 np0005590528 affectionate_mcnulty[75668]: 
Jan 21 08:44:31 np0005590528 affectionate_mcnulty[75668]: [global]
Jan 21 08:44:31 np0005590528 affectionate_mcnulty[75668]: 	fsid = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:44:31 np0005590528 affectionate_mcnulty[75668]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 21 08:44:31 np0005590528 affectionate_mcnulty[75668]: 	osd_crush_chooseleaf_type = 0
Jan 21 08:44:31 np0005590528 systemd[1]: libpod-ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788.scope: Deactivated successfully.
Jan 21 08:44:31 np0005590528 podman[75652]: 2026-01-21 13:44:31.644523403 +0000 UTC m=+0.632848017 container died ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:31 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e-merged.mount: Deactivated successfully.
Jan 21 08:44:31 np0005590528 podman[75652]: 2026-01-21 13:44:31.682641554 +0000 UTC m=+0.670966148 container remove ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:31 np0005590528 systemd[1]: libpod-conmon-ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788.scope: Deactivated successfully.
Jan 21 08:44:31 np0005590528 podman[75705]: 2026-01-21 13:44:31.762163115 +0000 UTC m=+0.050622101 container create 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:31 np0005590528 systemd[1]: Started libpod-conmon-8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940.scope.
Jan 21 08:44:31 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9019d3d50c8722baa4d6272f282342ab6401abcb7fe806e8007be463f284e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9019d3d50c8722baa4d6272f282342ab6401abcb7fe806e8007be463f284e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9019d3d50c8722baa4d6272f282342ab6401abcb7fe806e8007be463f284e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:31 np0005590528 podman[75705]: 2026-01-21 13:44:31.735324954 +0000 UTC m=+0.023783960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:31 np0005590528 podman[75705]: 2026-01-21 13:44:31.836150774 +0000 UTC m=+0.124609770 container init 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:44:31 np0005590528 podman[75705]: 2026-01-21 13:44:31.850113707 +0000 UTC m=+0.138572723 container start 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 08:44:31 np0005590528 podman[75705]: 2026-01-21 13:44:31.854125113 +0000 UTC m=+0.142584119 container attach 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:44:32 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/926465230' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 08:44:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 21 08:44:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2844574049' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 21 08:44:33 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:33 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:33 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2844574049' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 21 08:44:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2844574049' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 21 08:44:33 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.tnwklj(active, since 4s)
Jan 21 08:44:33 np0005590528 systemd[1]: libpod-8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940.scope: Deactivated successfully.
Jan 21 08:44:33 np0005590528 conmon[75722]: conmon 8d4ffffe4e948b803d0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940.scope/container/memory.events
Jan 21 08:44:33 np0005590528 podman[75748]: 2026-01-21 13:44:33.33506587 +0000 UTC m=+0.026387231 container died 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: ignoring --setuser ceph since I am not root
Jan 21 08:44:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: ignoring --setgroup ceph since I am not root
Jan 21 08:44:33 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2e9019d3d50c8722baa4d6272f282342ab6401abcb7fe806e8007be463f284e0-merged.mount: Deactivated successfully.
Jan 21 08:44:33 np0005590528 ceph-mgr[75322]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 21 08:44:33 np0005590528 ceph-mgr[75322]: pidfile_write: ignore empty --pid-file
Jan 21 08:44:33 np0005590528 podman[75748]: 2026-01-21 13:44:33.376849209 +0000 UTC m=+0.068170550 container remove 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 08:44:33 np0005590528 systemd[1]: libpod-conmon-8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940.scope: Deactivated successfully.
Jan 21 08:44:33 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'alerts'
Jan 21 08:44:33 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'balancer'
Jan 21 08:44:33 np0005590528 podman[75782]: 2026-01-21 13:44:33.42082081 +0000 UTC m=+0.020681105 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:33 np0005590528 podman[75782]: 2026-01-21 13:44:33.533679587 +0000 UTC m=+0.133539882 container create 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:33 np0005590528 systemd[1]: Started libpod-conmon-4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9.scope.
Jan 21 08:44:33 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'cephadm'
Jan 21 08:44:33 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b1cff38de1756ec5fd604e4eaaffc7ed745825c3dfe883a41687d1e95ac701/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b1cff38de1756ec5fd604e4eaaffc7ed745825c3dfe883a41687d1e95ac701/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b1cff38de1756ec5fd604e4eaaffc7ed745825c3dfe883a41687d1e95ac701/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:33 np0005590528 podman[75782]: 2026-01-21 13:44:33.649871184 +0000 UTC m=+0.249731479 container init 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:33 np0005590528 podman[75782]: 2026-01-21 13:44:33.6559536 +0000 UTC m=+0.255813885 container start 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:33 np0005590528 podman[75782]: 2026-01-21 13:44:33.659514065 +0000 UTC m=+0.259374360 container attach 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:44:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 21 08:44:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088586261' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 21 08:44:34 np0005590528 hungry_hamilton[75799]: {
Jan 21 08:44:34 np0005590528 hungry_hamilton[75799]:    "epoch": 5,
Jan 21 08:44:34 np0005590528 hungry_hamilton[75799]:    "available": true,
Jan 21 08:44:34 np0005590528 hungry_hamilton[75799]:    "active_name": "compute-0.tnwklj",
Jan 21 08:44:34 np0005590528 hungry_hamilton[75799]:    "num_standby": 0
Jan 21 08:44:34 np0005590528 hungry_hamilton[75799]: }
Jan 21 08:44:34 np0005590528 systemd[1]: libpod-4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9.scope: Deactivated successfully.
Jan 21 08:44:34 np0005590528 podman[75782]: 2026-01-21 13:44:34.181290077 +0000 UTC m=+0.781150372 container died 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:44:34 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b6b1cff38de1756ec5fd604e4eaaffc7ed745825c3dfe883a41687d1e95ac701-merged.mount: Deactivated successfully.
Jan 21 08:44:34 np0005590528 podman[75782]: 2026-01-21 13:44:34.217735128 +0000 UTC m=+0.817595413 container remove 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:34 np0005590528 systemd[1]: libpod-conmon-4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9.scope: Deactivated successfully.
Jan 21 08:44:34 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2844574049' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 21 08:44:34 np0005590528 podman[75848]: 2026-01-21 13:44:34.286860589 +0000 UTC m=+0.048368186 container create 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Jan 21 08:44:34 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'crash'
Jan 21 08:44:34 np0005590528 systemd[1]: Started libpod-conmon-4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de.scope.
Jan 21 08:44:34 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9d745ee400c0247c5d80a711334d76fae5630d6026c6cf06a34c8e7ce8e4af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9d745ee400c0247c5d80a711334d76fae5630d6026c6cf06a34c8e7ce8e4af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9d745ee400c0247c5d80a711334d76fae5630d6026c6cf06a34c8e7ce8e4af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:34 np0005590528 podman[75848]: 2026-01-21 13:44:34.352346955 +0000 UTC m=+0.113854552 container init 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:34 np0005590528 podman[75848]: 2026-01-21 13:44:34.268574893 +0000 UTC m=+0.030082520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:34 np0005590528 podman[75848]: 2026-01-21 13:44:34.358506142 +0000 UTC m=+0.120013739 container start 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:34 np0005590528 podman[75848]: 2026-01-21 13:44:34.366102044 +0000 UTC m=+0.127609681 container attach 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 08:44:34 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'dashboard'
Jan 21 08:44:35 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'devicehealth'
Jan 21 08:44:35 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 08:44:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 08:44:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 08:44:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]:  from numpy import show_config as show_numpy_config
Jan 21 08:44:35 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'influx'
Jan 21 08:44:35 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'insights'
Jan 21 08:44:35 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'iostat'
Jan 21 08:44:35 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'k8sevents'
Jan 21 08:44:35 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'localpool'
Jan 21 08:44:35 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 08:44:36 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'mirroring'
Jan 21 08:44:36 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'nfs'
Jan 21 08:44:36 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'orchestrator'
Jan 21 08:44:36 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 08:44:36 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'osd_support'
Jan 21 08:44:36 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 08:44:36 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'progress'
Jan 21 08:44:36 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'prometheus'
Jan 21 08:44:37 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'rbd_support'
Jan 21 08:44:37 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'rgw'
Jan 21 08:44:37 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'rook'
Jan 21 08:44:38 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'selftest'
Jan 21 08:44:38 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'smb'
Jan 21 08:44:38 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'snap_schedule'
Jan 21 08:44:38 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'stats'
Jan 21 08:44:38 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'status'
Jan 21 08:44:38 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'telegraf'
Jan 21 08:44:38 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'telemetry'
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: mgr[py] Loading python module 'volumes'
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Active manager daemon compute-0.tnwklj restarted
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.tnwklj
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: ms_deliver_dispatch: unhandled message 0x55ddfbe18000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: mgr handle_mgr_map Activating!
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: mgr handle_mgr_map I am now activating
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.tnwklj(active, starting, since 0.0142623s)
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} v 0)
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} : dispatch
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata"} : dispatch
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata"} : dispatch
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata"} : dispatch
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: balancer
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Manager daemon compute-0.tnwklj is now available
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Starting
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:44:39
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:44:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] No pools available
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: Active manager daemon compute-0.tnwklj restarted
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: Activating manager daemon compute-0.tnwklj
Jan 21 08:44:39 np0005590528 ceph-mon[75031]: Manager daemon compute-0.tnwklj is now available
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.tnwklj(active, since 1.41169s)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 21 08:44:40 np0005590528 happy_bell[75863]: {
Jan 21 08:44:40 np0005590528 happy_bell[75863]:    "mgrmap_epoch": 7,
Jan 21 08:44:40 np0005590528 happy_bell[75863]:    "initialized": true
Jan 21 08:44:40 np0005590528 happy_bell[75863]: }
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 21 08:44:40 np0005590528 systemd[1]: libpod-4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de.scope: Deactivated successfully.
Jan 21 08:44:40 np0005590528 podman[75848]: 2026-01-21 13:44:40.933329619 +0000 UTC m=+6.694837276 container died 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: cephadm
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: crash
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: devicehealth
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: iostat
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [devicehealth INFO root] Starting
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: nfs
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: orchestrator
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: pg_autoscaler
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: progress
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [progress INFO root] Loading...
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [progress INFO root] No stored events to load
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [progress INFO root] Loaded [] historic events
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 08:44:40 np0005590528 systemd[1]: var-lib-containers-storage-overlay-fb9d745ee400c0247c5d80a711334d76fae5630d6026c6cf06a34c8e7ce8e4af-merged.mount: Deactivated successfully.
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] recovery thread starting
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] starting setup
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: rbd_support
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: status
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} v 0)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} : dispatch
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: telemetry
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] PerfHandler: starting
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TaskHandler: starting
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} v 0)
Jan 21 08:44:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} : dispatch
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] setup complete
Jan 21 08:44:40 np0005590528 podman[75848]: 2026-01-21 13:44:40.981905173 +0000 UTC m=+6.743412770 container remove 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:40 np0005590528 ceph-mgr[75322]: mgr load Constructed class from module: volumes
Jan 21 08:44:40 np0005590528 systemd[1]: libpod-conmon-4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de.scope: Deactivated successfully.
Jan 21 08:44:41 np0005590528 podman[76007]: 2026-01-21 13:44:41.047043274 +0000 UTC m=+0.042960265 container create f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019911966 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:44:41 np0005590528 systemd[1]: Started libpod-conmon-f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859.scope.
Jan 21 08:44:41 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baeee0fb04c6fa7093a7a794a1448e998a3a8dd5af7847e68df82008701846b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baeee0fb04c6fa7093a7a794a1448e998a3a8dd5af7847e68df82008701846b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baeee0fb04c6fa7093a7a794a1448e998a3a8dd5af7847e68df82008701846b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:41 np0005590528 podman[76007]: 2026-01-21 13:44:41.02718946 +0000 UTC m=+0.023106481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:41 np0005590528 podman[76007]: 2026-01-21 13:44:41.134416701 +0000 UTC m=+0.130333712 container init f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:41 np0005590528 podman[76007]: 2026-01-21 13:44:41.144415214 +0000 UTC m=+0.140332205 container start f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:41 np0005590528 podman[76007]: 2026-01-21 13:44:41.151218301 +0000 UTC m=+0.147135332 container attach f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:41 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1952398831' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: Found migration_current of "None". Setting to last migration.
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} : dispatch
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} : dispatch
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1952398831' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1952398831' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 21 08:44:41 np0005590528 cool_herschel[76024]: module 'orchestrator' is already enabled (always-on)
Jan 21 08:44:41 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.tnwklj(active, since 2s)
Jan 21 08:44:41 np0005590528 systemd[1]: libpod-f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859.scope: Deactivated successfully.
Jan 21 08:44:42 np0005590528 podman[76050]: 2026-01-21 13:44:42.029724334 +0000 UTC m=+0.038818264 container died f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:42 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0baeee0fb04c6fa7093a7a794a1448e998a3a8dd5af7847e68df82008701846b-merged.mount: Deactivated successfully.
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Bus STARTING
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Bus STARTING
Jan 21 08:44:42 np0005590528 podman[76050]: 2026-01-21 13:44:42.076080646 +0000 UTC m=+0.085174566 container remove f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 08:44:42 np0005590528 systemd[1]: libpod-conmon-f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859.scope: Deactivated successfully.
Jan 21 08:44:42 np0005590528 podman[76076]: 2026-01-21 13:44:42.166618419 +0000 UTC m=+0.053697027 container create 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Serving on https://192.168.122.100:7150
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Serving on https://192.168.122.100:7150
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Client ('192.168.122.100', 33734) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Client ('192.168.122.100', 33734) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 08:44:42 np0005590528 systemd[1]: Started libpod-conmon-9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000.scope.
Jan 21 08:44:42 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d437d9abe57daf62bc3014d5486546b11bf12347fca764435d939b63d1533c2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d437d9abe57daf62bc3014d5486546b11bf12347fca764435d939b63d1533c2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d437d9abe57daf62bc3014d5486546b11bf12347fca764435d939b63d1533c2b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:42 np0005590528 podman[76076]: 2026-01-21 13:44:42.139726915 +0000 UTC m=+0.026805613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Serving on http://192.168.122.100:8765
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Serving on http://192.168.122.100:8765
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Bus STARTED
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Bus STARTED
Jan 21 08:44:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 08:44:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 08:44:42 np0005590528 podman[76076]: 2026-01-21 13:44:42.289596035 +0000 UTC m=+0.176674663 container init 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:42 np0005590528 podman[76076]: 2026-01-21 13:44:42.295266486 +0000 UTC m=+0.182345114 container start 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:42 np0005590528 podman[76076]: 2026-01-21 13:44:42.300060904 +0000 UTC m=+0.187139542 container attach 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 21 08:44:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 08:44:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 08:44:42 np0005590528 systemd[1]: libpod-9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000.scope: Deactivated successfully.
Jan 21 08:44:42 np0005590528 podman[76076]: 2026-01-21 13:44:42.849784044 +0000 UTC m=+0.736862732 container died 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 08:44:42 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1952398831' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Bus STARTING
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Serving on https://192.168.122.100:7150
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Client ('192.168.122.100', 33734) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Serving on http://192.168.122.100:8765
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Bus STARTED
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:43 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d437d9abe57daf62bc3014d5486546b11bf12347fca764435d939b63d1533c2b-merged.mount: Deactivated successfully.
Jan 21 08:44:43 np0005590528 podman[76076]: 2026-01-21 13:44:43.123904968 +0000 UTC m=+1.010983576 container remove 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 08:44:43 np0005590528 podman[76144]: 2026-01-21 13:44:43.192025561 +0000 UTC m=+0.050196688 container create e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:43 np0005590528 systemd[1]: Started libpod-conmon-e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d.scope.
Jan 21 08:44:43 np0005590528 systemd[1]: libpod-conmon-9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000.scope: Deactivated successfully.
Jan 21 08:44:43 np0005590528 podman[76144]: 2026-01-21 13:44:43.165735935 +0000 UTC m=+0.023906972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:43 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507ce1ae8cc3495db02181b62d3254f4a324a22b7e3ad4cd1e9011769b9c6d27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507ce1ae8cc3495db02181b62d3254f4a324a22b7e3ad4cd1e9011769b9c6d27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507ce1ae8cc3495db02181b62d3254f4a324a22b7e3ad4cd1e9011769b9c6d27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:43 np0005590528 podman[76144]: 2026-01-21 13:44:43.288175363 +0000 UTC m=+0.146346400 container init e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:43 np0005590528 podman[76144]: 2026-01-21 13:44:43.292721358 +0000 UTC m=+0.150892355 container start e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:43 np0005590528 podman[76144]: 2026-01-21 13:44:43.297144782 +0000 UTC m=+0.155315859 container attach e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:43 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:43 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Set ssh ssh_user
Jan 21 08:44:43 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 21 08:44:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:43 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Set ssh ssh_config
Jan 21 08:44:43 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 21 08:44:43 np0005590528 ceph-mgr[75322]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 21 08:44:43 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 21 08:44:43 np0005590528 eager_hermann[76161]: ssh user set to ceph-admin. sudo will be used
Jan 21 08:44:43 np0005590528 systemd[1]: libpod-e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d.scope: Deactivated successfully.
Jan 21 08:44:43 np0005590528 podman[76144]: 2026-01-21 13:44:43.79918804 +0000 UTC m=+0.657359067 container died e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 21 08:44:43 np0005590528 systemd[1]: var-lib-containers-storage-overlay-507ce1ae8cc3495db02181b62d3254f4a324a22b7e3ad4cd1e9011769b9c6d27-merged.mount: Deactivated successfully.
Jan 21 08:44:43 np0005590528 podman[76144]: 2026-01-21 13:44:43.85449304 +0000 UTC m=+0.712664037 container remove e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:44:43 np0005590528 systemd[1]: libpod-conmon-e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d.scope: Deactivated successfully.
Jan 21 08:44:43 np0005590528 podman[76200]: 2026-01-21 13:44:43.929885236 +0000 UTC m=+0.048868988 container create 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:43 np0005590528 systemd[1]: Started libpod-conmon-6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c.scope.
Jan 21 08:44:43 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:43 np0005590528 podman[76200]: 2026-01-21 13:44:43.989270994 +0000 UTC m=+0.108254846 container init 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 08:44:43 np0005590528 podman[76200]: 2026-01-21 13:44:43.996472597 +0000 UTC m=+0.115456399 container start 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 08:44:44 np0005590528 podman[76200]: 2026-01-21 13:44:44.000956741 +0000 UTC m=+0.119940533 container attach 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 08:44:44 np0005590528 podman[76200]: 2026-01-21 13:44:43.909282232 +0000 UTC m=+0.028266084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:44 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 21 08:44:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:44 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 21 08:44:44 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 21 08:44:44 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Set ssh private key
Jan 21 08:44:44 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 21 08:44:44 np0005590528 systemd[1]: libpod-6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c.scope: Deactivated successfully.
Jan 21 08:44:44 np0005590528 podman[76200]: 2026-01-21 13:44:44.422805545 +0000 UTC m=+0.541789337 container died 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:44 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae-merged.mount: Deactivated successfully.
Jan 21 08:44:44 np0005590528 podman[76200]: 2026-01-21 13:44:44.456760559 +0000 UTC m=+0.575744321 container remove 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 08:44:44 np0005590528 systemd[1]: libpod-conmon-6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c.scope: Deactivated successfully.
Jan 21 08:44:44 np0005590528 podman[76254]: 2026-01-21 13:44:44.527338596 +0000 UTC m=+0.044595987 container create 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 08:44:44 np0005590528 systemd[1]: Started libpod-conmon-98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856.scope.
Jan 21 08:44:44 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:44 np0005590528 podman[76254]: 2026-01-21 13:44:44.508066071 +0000 UTC m=+0.025323482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:44 np0005590528 podman[76254]: 2026-01-21 13:44:44.613524117 +0000 UTC m=+0.130781518 container init 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:44:44 np0005590528 podman[76254]: 2026-01-21 13:44:44.618601089 +0000 UTC m=+0.135858520 container start 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 08:44:44 np0005590528 podman[76254]: 2026-01-21 13:44:44.623204146 +0000 UTC m=+0.140461547 container attach 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:44:44 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 21 08:44:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:45 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 21 08:44:45 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 21 08:44:45 np0005590528 systemd[1]: libpod-98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856.scope: Deactivated successfully.
Jan 21 08:44:45 np0005590528 podman[76296]: 2026-01-21 13:44:45.076384046 +0000 UTC m=+0.021541498 container died 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:45 np0005590528 ceph-mon[75031]: Set ssh ssh_user
Jan 21 08:44:45 np0005590528 ceph-mon[75031]: Set ssh ssh_config
Jan 21 08:44:45 np0005590528 ceph-mon[75031]: ssh user set to ceph-admin. sudo will be used
Jan 21 08:44:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:45 np0005590528 systemd[1]: var-lib-containers-storage-overlay-7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b-merged.mount: Deactivated successfully.
Jan 21 08:44:45 np0005590528 podman[76296]: 2026-01-21 13:44:45.118196873 +0000 UTC m=+0.063354355 container remove 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:45 np0005590528 systemd[1]: libpod-conmon-98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856.scope: Deactivated successfully.
Jan 21 08:44:45 np0005590528 podman[76311]: 2026-01-21 13:44:45.192731177 +0000 UTC m=+0.047426487 container create bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:45 np0005590528 systemd[1]: Started libpod-conmon-bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5.scope.
Jan 21 08:44:45 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:45 np0005590528 podman[76311]: 2026-01-21 13:44:45.170635582 +0000 UTC m=+0.025330922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922efcd4f4991e0b6fa0288e3d7ef7a7b55b2c8fc0aede75fb03a2d1646a54c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922efcd4f4991e0b6fa0288e3d7ef7a7b55b2c8fc0aede75fb03a2d1646a54c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922efcd4f4991e0b6fa0288e3d7ef7a7b55b2c8fc0aede75fb03a2d1646a54c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:45 np0005590528 podman[76311]: 2026-01-21 13:44:45.299400271 +0000 UTC m=+0.154095561 container init bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 21 08:44:45 np0005590528 podman[76311]: 2026-01-21 13:44:45.306639474 +0000 UTC m=+0.161334764 container start bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:45 np0005590528 podman[76311]: 2026-01-21 13:44:45.316532815 +0000 UTC m=+0.171228105 container attach bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:45 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:45 np0005590528 keen_khorana[76327]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNBJgNZdVoji2xTK2lG5ndIv4X1xEtVZQs4dDQzvfbLwLUVgdqlevKS268jXG1mUgr1C08P9beV70uqaAgOh5fMJOEdYLN/c3D27OCYQnDCyuallCCjQuYs4OcgMQr2aW5xgo7ckrqlSxO4dE/QDi3vpX8q9rntqqDpTTf9oXuzezXWwYnuE9qPIM8yh2VLNxAACRy/jp77AuRd2OsUbQeMawhyHhZy1RBhvvuzTs2CRz1mpVUWiJ+TDrCFNv/LBFfvLhfba/YCmrJu14C/N9eMIEfsgJSJjKcQrLBJF4SrKaJhvea306fBEkZwFfRqi9CKPnptokktbQ7QxzeoYQEgrA1FaG+69xASHcb9mmslk8zmpJMCDLxjNXzXwwr6mnC1l35x8Bh6kyePvmHcXpj07zryJ/AyHwRVyjeQa0Lzz2S2G5CxUn2EQWF0LgrlVjG1BM5hdOjgdDNrDVYUOb9Hooq7BAxqqYD8gWVnsT0QPbitHxPQQglR+6C51Db3d0= zuul@controller
Jan 21 08:44:45 np0005590528 systemd[1]: libpod-bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5.scope: Deactivated successfully.
Jan 21 08:44:45 np0005590528 podman[76311]: 2026-01-21 13:44:45.724808234 +0000 UTC m=+0.579503554 container died bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:45 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1922efcd4f4991e0b6fa0288e3d7ef7a7b55b2c8fc0aede75fb03a2d1646a54c-merged.mount: Deactivated successfully.
Jan 21 08:44:45 np0005590528 podman[76311]: 2026-01-21 13:44:45.779086449 +0000 UTC m=+0.633781739 container remove bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:45 np0005590528 systemd[1]: libpod-conmon-bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5.scope: Deactivated successfully.
Jan 21 08:44:45 np0005590528 podman[76367]: 2026-01-21 13:44:45.8407315 +0000 UTC m=+0.043762596 container create 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 08:44:45 np0005590528 systemd[1]: Started libpod-conmon-1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d.scope.
Jan 21 08:44:45 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62445611cfc9fd346bccf6674962707c0dd7f83d2b8e0bd7b33d26a0fb5732e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62445611cfc9fd346bccf6674962707c0dd7f83d2b8e0bd7b33d26a0fb5732e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62445611cfc9fd346bccf6674962707c0dd7f83d2b8e0bd7b33d26a0fb5732e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:45 np0005590528 podman[76367]: 2026-01-21 13:44:45.910497326 +0000 UTC m=+0.113528392 container init 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 08:44:45 np0005590528 podman[76367]: 2026-01-21 13:44:45.818221438 +0000 UTC m=+0.021252494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:45 np0005590528 podman[76367]: 2026-01-21 13:44:45.915909223 +0000 UTC m=+0.118940289 container start 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:45 np0005590528 podman[76367]: 2026-01-21 13:44:45.919407873 +0000 UTC m=+0.122438949 container attach 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:44:46 np0005590528 ceph-mon[75031]: Set ssh ssh_identity_key
Jan 21 08:44:46 np0005590528 ceph-mon[75031]: Set ssh private key
Jan 21 08:44:46 np0005590528 ceph-mon[75031]: Set ssh ssh_identity_pub
Jan 21 08:44:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:46 np0005590528 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 08:44:46 np0005590528 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 08:44:46 np0005590528 systemd-logind[780]: New session 21 of user ceph-admin.
Jan 21 08:44:46 np0005590528 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 08:44:46 np0005590528 systemd[1]: Starting User Manager for UID 42477...
Jan 21 08:44:46 np0005590528 systemd[76413]: Queued start job for default target Main User Target.
Jan 21 08:44:46 np0005590528 systemd[76413]: Created slice User Application Slice.
Jan 21 08:44:46 np0005590528 systemd[76413]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 08:44:46 np0005590528 systemd[76413]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 08:44:46 np0005590528 systemd[76413]: Reached target Paths.
Jan 21 08:44:46 np0005590528 systemd[76413]: Reached target Timers.
Jan 21 08:44:46 np0005590528 systemd[76413]: Starting D-Bus User Message Bus Socket...
Jan 21 08:44:46 np0005590528 systemd[76413]: Starting Create User's Volatile Files and Directories...
Jan 21 08:44:46 np0005590528 systemd-logind[780]: New session 23 of user ceph-admin.
Jan 21 08:44:46 np0005590528 systemd[76413]: Listening on D-Bus User Message Bus Socket.
Jan 21 08:44:46 np0005590528 systemd[76413]: Finished Create User's Volatile Files and Directories.
Jan 21 08:44:46 np0005590528 systemd[76413]: Reached target Sockets.
Jan 21 08:44:46 np0005590528 systemd[76413]: Reached target Basic System.
Jan 21 08:44:46 np0005590528 systemd[76413]: Reached target Main User Target.
Jan 21 08:44:46 np0005590528 systemd[76413]: Startup finished in 124ms.
Jan 21 08:44:46 np0005590528 systemd[1]: Started User Manager for UID 42477.
Jan 21 08:44:46 np0005590528 systemd[1]: Started Session 21 of User ceph-admin.
Jan 21 08:44:46 np0005590528 systemd[1]: Started Session 23 of User ceph-admin.
Jan 21 08:44:46 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:47 np0005590528 systemd-logind[780]: New session 24 of user ceph-admin.
Jan 21 08:44:47 np0005590528 systemd[1]: Started Session 24 of User ceph-admin.
Jan 21 08:44:47 np0005590528 systemd-logind[780]: New session 25 of user ceph-admin.
Jan 21 08:44:47 np0005590528 systemd[1]: Started Session 25 of User ceph-admin.
Jan 21 08:44:47 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:47 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 21 08:44:47 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 21 08:44:47 np0005590528 systemd-logind[780]: New session 26 of user ceph-admin.
Jan 21 08:44:47 np0005590528 systemd[1]: Started Session 26 of User ceph-admin.
Jan 21 08:44:48 np0005590528 systemd-logind[780]: New session 27 of user ceph-admin.
Jan 21 08:44:48 np0005590528 systemd[1]: Started Session 27 of User ceph-admin.
Jan 21 08:44:48 np0005590528 ceph-mon[75031]: Deploying cephadm binary to compute-0
Jan 21 08:44:48 np0005590528 systemd-logind[780]: New session 28 of user ceph-admin.
Jan 21 08:44:48 np0005590528 systemd[1]: Started Session 28 of User ceph-admin.
Jan 21 08:44:48 np0005590528 systemd-logind[780]: New session 29 of user ceph-admin.
Jan 21 08:44:48 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:48 np0005590528 systemd[1]: Started Session 29 of User ceph-admin.
Jan 21 08:44:49 np0005590528 systemd-logind[780]: New session 30 of user ceph-admin.
Jan 21 08:44:49 np0005590528 systemd[1]: Started Session 30 of User ceph-admin.
Jan 21 08:44:49 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:49 np0005590528 systemd-logind[780]: New session 31 of user ceph-admin.
Jan 21 08:44:49 np0005590528 systemd[1]: Started Session 31 of User ceph-admin.
Jan 21 08:44:50 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054703 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:44:51 np0005590528 systemd-logind[780]: New session 32 of user ceph-admin.
Jan 21 08:44:51 np0005590528 systemd[1]: Started Session 32 of User ceph-admin.
Jan 21 08:44:51 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:51 np0005590528 systemd-logind[780]: New session 33 of user ceph-admin.
Jan 21 08:44:51 np0005590528 systemd[1]: Started Session 33 of User ceph-admin.
Jan 21 08:44:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 08:44:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:52 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Added host compute-0
Jan 21 08:44:52 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 21 08:44:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 08:44:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 08:44:52 np0005590528 recursing_mahavira[76383]: Added host 'compute-0' with addr '192.168.122.100'
Jan 21 08:44:52 np0005590528 systemd[1]: libpod-1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d.scope: Deactivated successfully.
Jan 21 08:44:52 np0005590528 podman[76367]: 2026-01-21 13:44:52.088410487 +0000 UTC m=+6.291441543 container died 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:44:52 np0005590528 systemd[1]: var-lib-containers-storage-overlay-62445611cfc9fd346bccf6674962707c0dd7f83d2b8e0bd7b33d26a0fb5732e9-merged.mount: Deactivated successfully.
Jan 21 08:44:52 np0005590528 podman[76367]: 2026-01-21 13:44:52.140255698 +0000 UTC m=+6.343286754 container remove 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:44:52 np0005590528 systemd[1]: libpod-conmon-1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d.scope: Deactivated successfully.
Jan 21 08:44:52 np0005590528 podman[76821]: 2026-01-21 13:44:52.202919122 +0000 UTC m=+0.041806339 container create 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:52 np0005590528 systemd[1]: Started libpod-conmon-88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf.scope.
Jan 21 08:44:52 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5329de35a6ba6e53fee01dc1a77263d3a90272ff52c47c0be6f023631427f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5329de35a6ba6e53fee01dc1a77263d3a90272ff52c47c0be6f023631427f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5329de35a6ba6e53fee01dc1a77263d3a90272ff52c47c0be6f023631427f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:52 np0005590528 podman[76821]: 2026-01-21 13:44:52.275462308 +0000 UTC m=+0.114349545 container init 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 08:44:52 np0005590528 podman[76821]: 2026-01-21 13:44:52.181522846 +0000 UTC m=+0.020410083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:52 np0005590528 podman[76821]: 2026-01-21 13:44:52.28122079 +0000 UTC m=+0.120108007 container start 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 08:44:52 np0005590528 podman[76821]: 2026-01-21 13:44:52.284761261 +0000 UTC m=+0.123648478 container attach 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:52 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 21 08:44:52 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 21 08:44:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 08:44:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:52 np0005590528 unruffled_haibt[76848]: Scheduled mon update...
Jan 21 08:44:52 np0005590528 systemd[1]: libpod-88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf.scope: Deactivated successfully.
Jan 21 08:44:52 np0005590528 podman[76821]: 2026-01-21 13:44:52.7371517 +0000 UTC m=+0.576038927 container died 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 08:44:52 np0005590528 systemd[1]: var-lib-containers-storage-overlay-eb5329de35a6ba6e53fee01dc1a77263d3a90272ff52c47c0be6f023631427f8-merged.mount: Deactivated successfully.
Jan 21 08:44:52 np0005590528 podman[76821]: 2026-01-21 13:44:52.787207025 +0000 UTC m=+0.626094252 container remove 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:52 np0005590528 systemd[1]: libpod-conmon-88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf.scope: Deactivated successfully.
Jan 21 08:44:52 np0005590528 podman[76912]: 2026-01-21 13:44:52.857391907 +0000 UTC m=+0.044536037 container create 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:52 np0005590528 podman[76865]: 2026-01-21 13:44:52.879817027 +0000 UTC m=+0.497502895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:52 np0005590528 systemd[1]: Started libpod-conmon-2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948.scope.
Jan 21 08:44:52 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:52 np0005590528 podman[76912]: 2026-01-21 13:44:52.837147387 +0000 UTC m=+0.024291607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acb2758ad73b69991dcfc9dea647f20eeffd71425c7ab8036fc70a28a2008a9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acb2758ad73b69991dcfc9dea647f20eeffd71425c7ab8036fc70a28a2008a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acb2758ad73b69991dcfc9dea647f20eeffd71425c7ab8036fc70a28a2008a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:52 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:52 np0005590528 podman[76912]: 2026-01-21 13:44:52.949975429 +0000 UTC m=+0.137119649 container init 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:52 np0005590528 podman[76912]: 2026-01-21 13:44:52.956974089 +0000 UTC m=+0.144118239 container start 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 08:44:52 np0005590528 podman[76912]: 2026-01-21 13:44:52.961896339 +0000 UTC m=+0.149040499 container attach 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:52 np0005590528 podman[76946]: 2026-01-21 13:44:52.986801404 +0000 UTC m=+0.035742530 container create 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:53 np0005590528 systemd[1]: Started libpod-conmon-2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462.scope.
Jan 21 08:44:53 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:53 np0005590528 podman[76946]: 2026-01-21 13:44:53.068289528 +0000 UTC m=+0.117230654 container init 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:53 np0005590528 podman[76946]: 2026-01-21 13:44:52.970902348 +0000 UTC m=+0.019843494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:53 np0005590528 podman[76946]: 2026-01-21 13:44:53.073100497 +0000 UTC m=+0.122041663 container start 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: Added host compute-0
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:53 np0005590528 podman[76946]: 2026-01-21 13:44:53.076658427 +0000 UTC m=+0.125599623 container attach 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:53 np0005590528 wonderful_poitras[76964]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 21 08:44:53 np0005590528 systemd[1]: libpod-2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462.scope: Deactivated successfully.
Jan 21 08:44:53 np0005590528 podman[76946]: 2026-01-21 13:44:53.185236188 +0000 UTC m=+0.234177304 container died 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:53 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9ef147c3a38316ba7ab25ac4f80e18fd308bea7e1e1bcf858d38481b22643b1f-merged.mount: Deactivated successfully.
Jan 21 08:44:53 np0005590528 podman[76946]: 2026-01-21 13:44:53.223882659 +0000 UTC m=+0.272823785 container remove 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:53 np0005590528 systemd[1]: libpod-conmon-2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462.scope: Deactivated successfully.
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:53 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:53 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 21 08:44:53 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:53 np0005590528 nervous_franklin[76931]: Scheduled mgr update...
Jan 21 08:44:53 np0005590528 systemd[1]: libpod-2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948.scope: Deactivated successfully.
Jan 21 08:44:53 np0005590528 podman[76912]: 2026-01-21 13:44:53.399540568 +0000 UTC m=+0.586684688 container died 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 08:44:53 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6acb2758ad73b69991dcfc9dea647f20eeffd71425c7ab8036fc70a28a2008a9-merged.mount: Deactivated successfully.
Jan 21 08:44:53 np0005590528 podman[76912]: 2026-01-21 13:44:53.496641094 +0000 UTC m=+0.683785224 container remove 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:53 np0005590528 systemd[1]: libpod-conmon-2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948.scope: Deactivated successfully.
Jan 21 08:44:53 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:53 np0005590528 podman[77066]: 2026-01-21 13:44:53.559106016 +0000 UTC m=+0.041916229 container create 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 08:44:53 np0005590528 systemd[1]: Started libpod-conmon-14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e.scope.
Jan 21 08:44:53 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865fb27c6dc4463e0db2c466d400c9a55cfea9379c78615f38e27f9e2617ed55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865fb27c6dc4463e0db2c466d400c9a55cfea9379c78615f38e27f9e2617ed55/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865fb27c6dc4463e0db2c466d400c9a55cfea9379c78615f38e27f9e2617ed55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:53 np0005590528 podman[77066]: 2026-01-21 13:44:53.634044616 +0000 UTC m=+0.116854899 container init 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 08:44:53 np0005590528 podman[77066]: 2026-01-21 13:44:53.541597656 +0000 UTC m=+0.024407879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:53 np0005590528 podman[77066]: 2026-01-21 13:44:53.645229945 +0000 UTC m=+0.128040188 container start 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 08:44:53 np0005590528 podman[77066]: 2026-01-21 13:44:53.649501876 +0000 UTC m=+0.132312089 container attach 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:44:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:54 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:54 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service crash spec with placement *
Jan 21 08:44:54 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 21 08:44:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 08:44:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:54 np0005590528 condescending_sammet[77084]: Scheduled crash update...
Jan 21 08:44:54 np0005590528 ceph-mon[75031]: Saving service mon spec with placement count:5
Jan 21 08:44:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:54 np0005590528 systemd[1]: libpod-14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e.scope: Deactivated successfully.
Jan 21 08:44:54 np0005590528 podman[77066]: 2026-01-21 13:44:54.396891868 +0000 UTC m=+0.879702101 container died 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:54 np0005590528 podman[77221]: 2026-01-21 13:44:54.424441612 +0000 UTC m=+0.087205386 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:54 np0005590528 systemd[1]: var-lib-containers-storage-overlay-865fb27c6dc4463e0db2c466d400c9a55cfea9379c78615f38e27f9e2617ed55-merged.mount: Deactivated successfully.
Jan 21 08:44:54 np0005590528 podman[77066]: 2026-01-21 13:44:54.449315537 +0000 UTC m=+0.932125740 container remove 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:54 np0005590528 systemd[1]: libpod-conmon-14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e.scope: Deactivated successfully.
Jan 21 08:44:54 np0005590528 podman[77221]: 2026-01-21 13:44:54.515299159 +0000 UTC m=+0.178062933 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:54 np0005590528 podman[77252]: 2026-01-21 13:44:54.537514906 +0000 UTC m=+0.055746598 container create 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 08:44:54 np0005590528 systemd[1]: Started libpod-conmon-1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b.scope.
Jan 21 08:44:54 np0005590528 podman[77252]: 2026-01-21 13:44:54.508508262 +0000 UTC m=+0.026739953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:54 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d8bd823c8b0ffff623068d01fcc3c4c405e76674813b460d3ad954b44cb04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d8bd823c8b0ffff623068d01fcc3c4c405e76674813b460d3ad954b44cb04/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d8bd823c8b0ffff623068d01fcc3c4c405e76674813b460d3ad954b44cb04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:54 np0005590528 podman[77252]: 2026-01-21 13:44:54.618850048 +0000 UTC m=+0.137081719 container init 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:44:54 np0005590528 podman[77252]: 2026-01-21 13:44:54.623908119 +0000 UTC m=+0.142139770 container start 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:54 np0005590528 podman[77252]: 2026-01-21 13:44:54.629437118 +0000 UTC m=+0.147668839 container attach 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:44:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:44:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:54 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3817281900' entity='client.admin' 
Jan 21 08:44:55 np0005590528 systemd[1]: libpod-1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b.scope: Deactivated successfully.
Jan 21 08:44:55 np0005590528 podman[77252]: 2026-01-21 13:44:55.103183113 +0000 UTC m=+0.621414764 container died 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 08:44:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-248d8bd823c8b0ffff623068d01fcc3c4c405e76674813b460d3ad954b44cb04-merged.mount: Deactivated successfully.
Jan 21 08:44:55 np0005590528 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77419 (sysctl)
Jan 21 08:44:55 np0005590528 podman[77252]: 2026-01-21 13:44:55.144168938 +0000 UTC m=+0.662400589 container remove 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:55 np0005590528 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 21 08:44:55 np0005590528 systemd[1]: libpod-conmon-1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b.scope: Deactivated successfully.
Jan 21 08:44:55 np0005590528 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 21 08:44:55 np0005590528 podman[77422]: 2026-01-21 13:44:55.223834326 +0000 UTC m=+0.057426951 container create b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 08:44:55 np0005590528 systemd[1]: Started libpod-conmon-b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd.scope.
Jan 21 08:44:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03748b1f16ab7d7dbdd282e49648aabda90e655558713999a028316c589814b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03748b1f16ab7d7dbdd282e49648aabda90e655558713999a028316c589814b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03748b1f16ab7d7dbdd282e49648aabda90e655558713999a028316c589814b2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:55 np0005590528 podman[77422]: 2026-01-21 13:44:55.201123492 +0000 UTC m=+0.034716207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:55 np0005590528 podman[77422]: 2026-01-21 13:44:55.304991715 +0000 UTC m=+0.138584360 container init b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 08:44:55 np0005590528 podman[77422]: 2026-01-21 13:44:55.31020874 +0000 UTC m=+0.143801365 container start b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:55 np0005590528 podman[77422]: 2026-01-21 13:44:55.314000993 +0000 UTC m=+0.147593628 container attach b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: Saving service mgr spec with placement count:2
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: Saving service crash spec with placement *
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/3817281900' entity='client.admin' 
Jan 21 08:44:55 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:55 np0005590528 systemd[1]: libpod-b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd.scope: Deactivated successfully.
Jan 21 08:44:55 np0005590528 podman[77422]: 2026-01-21 13:44:55.765027033 +0000 UTC m=+0.598619658 container died b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 08:44:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-03748b1f16ab7d7dbdd282e49648aabda90e655558713999a028316c589814b2-merged.mount: Deactivated successfully.
Jan 21 08:44:55 np0005590528 podman[77422]: 2026-01-21 13:44:55.814097893 +0000 UTC m=+0.647690518 container remove b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:55 np0005590528 systemd[1]: libpod-conmon-b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd.scope: Deactivated successfully.
Jan 21 08:44:55 np0005590528 podman[77557]: 2026-01-21 13:44:55.884793283 +0000 UTC m=+0.045242506 container create 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:44:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:55 np0005590528 systemd[1]: Started libpod-conmon-29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2.scope.
Jan 21 08:44:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:55 np0005590528 podman[77557]: 2026-01-21 13:44:55.867334354 +0000 UTC m=+0.027783597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea683f36b1b44efca8350a05d70858a3b9c19a93dd85360f3132181991aa52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea683f36b1b44efca8350a05d70858a3b9c19a93dd85360f3132181991aa52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea683f36b1b44efca8350a05d70858a3b9c19a93dd85360f3132181991aa52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:55 np0005590528 podman[77557]: 2026-01-21 13:44:55.976689566 +0000 UTC m=+0.137138839 container init 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:55 np0005590528 podman[77557]: 2026-01-21 13:44:55.982824893 +0000 UTC m=+0.143274116 container start 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:55 np0005590528 podman[77557]: 2026-01-21 13:44:55.986498046 +0000 UTC m=+0.146947319 container attach 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:44:56 np0005590528 podman[77666]: 2026-01-21 13:44:56.304396515 +0000 UTC m=+0.055547855 container create 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:56 np0005590528 systemd[1]: Started libpod-conmon-15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a.scope.
Jan 21 08:44:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:44:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 08:44:56 np0005590528 podman[77666]: 2026-01-21 13:44:56.270918647 +0000 UTC m=+0.022070027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:44:56 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:56 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Added label _admin to host compute-0
Jan 21 08:44:56 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 21 08:44:56 np0005590528 romantic_swanson[77601]: Added label _admin to host compute-0
Jan 21 08:44:56 np0005590528 podman[77666]: 2026-01-21 13:44:56.383305661 +0000 UTC m=+0.134457021 container init 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:56 np0005590528 systemd[1]: libpod-29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2.scope: Deactivated successfully.
Jan 21 08:44:56 np0005590528 podman[77557]: 2026-01-21 13:44:56.392600784 +0000 UTC m=+0.553050017 container died 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 08:44:56 np0005590528 podman[77666]: 2026-01-21 13:44:56.394276308 +0000 UTC m=+0.145427638 container start 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:44:56 np0005590528 distracted_banach[77683]: 167 167
Jan 21 08:44:56 np0005590528 systemd[1]: libpod-15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a.scope: Deactivated successfully.
Jan 21 08:44:56 np0005590528 podman[77666]: 2026-01-21 13:44:56.407165842 +0000 UTC m=+0.158317202 container attach 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 08:44:56 np0005590528 podman[77666]: 2026-01-21 13:44:56.407620278 +0000 UTC m=+0.158771608 container died 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:56 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1446e02596ef256c491643f9f5d073a0ae2e4cab120adc6855b36262759246f3-merged.mount: Deactivated successfully.
Jan 21 08:44:56 np0005590528 systemd[1]: var-lib-containers-storage-overlay-19ea683f36b1b44efca8350a05d70858a3b9c19a93dd85360f3132181991aa52-merged.mount: Deactivated successfully.
Jan 21 08:44:56 np0005590528 podman[77557]: 2026-01-21 13:44:56.473382827 +0000 UTC m=+0.633832060 container remove 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:56 np0005590528 podman[77666]: 2026-01-21 13:44:56.480672082 +0000 UTC m=+0.231823412 container remove 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:56 np0005590528 systemd[1]: libpod-conmon-15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a.scope: Deactivated successfully.
Jan 21 08:44:56 np0005590528 systemd[1]: libpod-conmon-29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2.scope: Deactivated successfully.
Jan 21 08:44:56 np0005590528 podman[77713]: 2026-01-21 13:44:56.543283615 +0000 UTC m=+0.046413933 container create a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 08:44:56 np0005590528 systemd[1]: Started libpod-conmon-a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343.scope.
Jan 21 08:44:56 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6685dce33b48c779b26398c0c6a17bca15f484a549f38128527a40a467a70df7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6685dce33b48c779b26398c0c6a17bca15f484a549f38128527a40a467a70df7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6685dce33b48c779b26398c0c6a17bca15f484a549f38128527a40a467a70df7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:56 np0005590528 podman[77713]: 2026-01-21 13:44:56.521413254 +0000 UTC m=+0.024543672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:56 np0005590528 podman[77713]: 2026-01-21 13:44:56.616179786 +0000 UTC m=+0.119310134 container init a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:56 np0005590528 podman[77713]: 2026-01-21 13:44:56.621524633 +0000 UTC m=+0.124654971 container start a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:44:56 np0005590528 podman[77713]: 2026-01-21 13:44:56.627210254 +0000 UTC m=+0.130340672 container attach a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 08:44:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:56 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 21 08:44:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1529798143' entity='client.admin' 
Jan 21 08:44:57 np0005590528 magical_snyder[77729]: set mgr/dashboard/cluster/status
Jan 21 08:44:57 np0005590528 systemd[1]: libpod-a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343.scope: Deactivated successfully.
Jan 21 08:44:57 np0005590528 podman[77713]: 2026-01-21 13:44:57.189347441 +0000 UTC m=+0.692477839 container died a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:44:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6685dce33b48c779b26398c0c6a17bca15f484a549f38128527a40a467a70df7-merged.mount: Deactivated successfully.
Jan 21 08:44:57 np0005590528 podman[77713]: 2026-01-21 13:44:57.232626589 +0000 UTC m=+0.735756937 container remove a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 08:44:57 np0005590528 systemd[1]: libpod-conmon-a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343.scope: Deactivated successfully.
Jan 21 08:44:57 np0005590528 systemd[1]: Reloading.
Jan 21 08:44:57 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:44:57 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:44:57 np0005590528 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 08:44:57 np0005590528 podman[77814]: 2026-01-21 13:44:57.6915212 +0000 UTC m=+0.040222444 container create 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:44:57 np0005590528 systemd[1]: Started libpod-conmon-19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b.scope.
Jan 21 08:44:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:57 np0005590528 podman[77814]: 2026-01-21 13:44:57.674291225 +0000 UTC m=+0.022992469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:44:57 np0005590528 podman[77814]: 2026-01-21 13:44:57.777067162 +0000 UTC m=+0.125768426 container init 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:57 np0005590528 podman[77814]: 2026-01-21 13:44:57.78672427 +0000 UTC m=+0.135425524 container start 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:57 np0005590528 podman[77814]: 2026-01-21 13:44:57.790773988 +0000 UTC m=+0.139475292 container attach 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:44:58 np0005590528 python3[77860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:44:58 np0005590528 podman[77866]: 2026-01-21 13:44:58.120373354 +0000 UTC m=+0.040331537 container create 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:44:58 np0005590528 systemd[1]: Started libpod-conmon-54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6.scope.
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: Added label _admin to host compute-0
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1529798143' entity='client.admin' 
Jan 21 08:44:58 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:44:58 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598713f7f5c33ec19ecbaaa0e7067df43ed44c758bfebf4dc9a4d427a489af9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:58 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598713f7f5c33ec19ecbaaa0e7067df43ed44c758bfebf4dc9a4d427a489af9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:44:58 np0005590528 podman[77866]: 2026-01-21 13:44:58.194163407 +0000 UTC m=+0.114121610 container init 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:58 np0005590528 podman[77866]: 2026-01-21 13:44:58.102015442 +0000 UTC m=+0.021973655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:44:58 np0005590528 podman[77866]: 2026-01-21 13:44:58.200319256 +0000 UTC m=+0.120277439 container start 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:44:58 np0005590528 podman[77866]: 2026-01-21 13:44:58.204051338 +0000 UTC m=+0.124009541 container attach 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]: [
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:    {
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "available": false,
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "being_replaced": false,
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "ceph_device_lvm": false,
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "lsm_data": {},
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "lvs": [],
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "path": "/dev/sr0",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "rejected_reasons": [
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "Insufficient space (<5GB)",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "Has a FileSystem"
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        ],
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        "sys_api": {
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "actuators": null,
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "device_nodes": [
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:                "sr0"
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            ],
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "devname": "sr0",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "human_readable_size": "482.00 KB",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "id_bus": "ata",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "model": "QEMU DVD-ROM",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "nr_requests": "2",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "parent": "/dev/sr0",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "partitions": {},
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "path": "/dev/sr0",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "removable": "1",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "rev": "2.5+",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "ro": "0",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "rotational": "1",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "sas_address": "",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "sas_device_handle": "",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "scheduler_mode": "mq-deadline",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "sectors": 0,
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "sectorsize": "2048",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "size": 493568.0,
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "support_discard": "2048",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "type": "disk",
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:            "vendor": "QEMU"
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:        }
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]:    }
Jan 21 08:44:58 np0005590528 strange_bhabha[77830]: ]
Jan 21 08:44:58 np0005590528 systemd[1]: libpod-19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b.scope: Deactivated successfully.
Jan 21 08:44:58 np0005590528 podman[77814]: 2026-01-21 13:44:58.291547488 +0000 UTC m=+0.640248762 container died 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 08:44:58 np0005590528 systemd[1]: var-lib-containers-storage-overlay-baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe-merged.mount: Deactivated successfully.
Jan 21 08:44:58 np0005590528 podman[77814]: 2026-01-21 13:44:58.334356629 +0000 UTC m=+0.683057873 container remove 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 08:44:58 np0005590528 systemd[1]: libpod-conmon-19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b.scope: Deactivated successfully.
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:44:58 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 21 08:44:58 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 21 08:44:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3310542890' entity='client.admin' 
Jan 21 08:44:58 np0005590528 systemd[1]: libpod-54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6.scope: Deactivated successfully.
Jan 21 08:44:58 np0005590528 podman[78603]: 2026-01-21 13:44:58.666834847 +0000 UTC m=+0.037290574 container died 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:44:58 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:44:58 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf
Jan 21 08:44:58 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf
Jan 21 08:44:59 np0005590528 systemd[1]: var-lib-containers-storage-overlay-598713f7f5c33ec19ecbaaa0e7067df43ed44c758bfebf4dc9a4d427a489af9e-merged.mount: Deactivated successfully.
Jan 21 08:44:59 np0005590528 podman[78603]: 2026-01-21 13:44:59.043751028 +0000 UTC m=+0.414206715 container remove 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:44:59 np0005590528 systemd[1]: libpod-conmon-54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6.scope: Deactivated successfully.
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: Updating compute-0:/etc/ceph/ceph.conf
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/3310542890' entity='client.admin' 
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf
Jan 21 08:44:59 np0005590528 ceph-mgr[75322]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 21 08:44:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:44:59 np0005590528 ceph-mon[75031]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 21 08:44:59 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 08:44:59 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 08:44:59 np0005590528 ansible-async_wrapper.py[79262]: Invoked with j808190305125 30 /home/zuul/.ansible/tmp/ansible-tmp-1769003099.4142342-36484-9282059414558/AnsiballZ_command.py _
Jan 21 08:45:00 np0005590528 ansible-async_wrapper.py[79322]: Starting module and watcher
Jan 21 08:45:00 np0005590528 ansible-async_wrapper.py[79322]: Start watching 79325 (30)
Jan 21 08:45:00 np0005590528 ansible-async_wrapper.py[79325]: Start module (79325)
Jan 21 08:45:00 np0005590528 ansible-async_wrapper.py[79262]: Return async_wrapper task started.
Jan 21 08:45:00 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring
Jan 21 08:45:00 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring
Jan 21 08:45:00 np0005590528 python3[79333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:00 np0005590528 podman[79392]: 2026-01-21 13:45:00.224005401 +0000 UTC m=+0.051225672 container create fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:00 np0005590528 systemd[1]: Started libpod-conmon-fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10.scope.
Jan 21 08:45:00 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:00 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e99922a2c4187a40db912742cc875f27ce87443a7df6171de89a2e09562d212/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:00 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e99922a2c4187a40db912742cc875f27ce87443a7df6171de89a2e09562d212/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:00 np0005590528 podman[79392]: 2026-01-21 13:45:00.202899929 +0000 UTC m=+0.030120240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:00 np0005590528 podman[79392]: 2026-01-21 13:45:00.311274916 +0000 UTC m=+0.138495237 container init fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:00 np0005590528 podman[79392]: 2026-01-21 13:45:00.320288336 +0000 UTC m=+0.147508607 container start fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 08:45:00 np0005590528 podman[79392]: 2026-01-21 13:45:00.32406792 +0000 UTC m=+0.151288221 container attach fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:00 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev 85295e5b-4a25-407c-8383-f553a6b980c4 (Updating crash deployment (+1 -> 1))
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:00 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 21 08:45:00 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 21 08:45:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 08:45:00 np0005590528 agitated_jemison[79446]: 
Jan 21 08:45:00 np0005590528 agitated_jemison[79446]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 08:45:00 np0005590528 systemd[1]: libpod-fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10.scope: Deactivated successfully.
Jan 21 08:45:00 np0005590528 podman[79392]: 2026-01-21 13:45:00.774349629 +0000 UTC m=+0.601569900 container died fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 08:45:00 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6e99922a2c4187a40db912742cc875f27ce87443a7df6171de89a2e09562d212-merged.mount: Deactivated successfully.
Jan 21 08:45:00 np0005590528 podman[79392]: 2026-01-21 13:45:00.817471204 +0000 UTC m=+0.644691485 container remove fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:00 np0005590528 systemd[1]: libpod-conmon-fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10.scope: Deactivated successfully.
Jan 21 08:45:00 np0005590528 ansible-async_wrapper.py[79325]: Module complete (79325)
Jan 21 08:45:00 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:01 np0005590528 podman[79756]: 2026-01-21 13:45:01.257175163 +0000 UTC m=+0.037007669 container create 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:01 np0005590528 systemd[1]: Started libpod-conmon-40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23.scope.
Jan 21 08:45:01 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:01 np0005590528 podman[79756]: 2026-01-21 13:45:01.330033893 +0000 UTC m=+0.109866429 container init 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 21 08:45:01 np0005590528 podman[79756]: 2026-01-21 13:45:01.239334868 +0000 UTC m=+0.019167394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:01 np0005590528 podman[79756]: 2026-01-21 13:45:01.337951206 +0000 UTC m=+0.117783712 container start 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 08:45:01 np0005590528 practical_bhaskara[79792]: 167 167
Jan 21 08:45:01 np0005590528 systemd[1]: libpod-40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23.scope: Deactivated successfully.
Jan 21 08:45:01 np0005590528 podman[79756]: 2026-01-21 13:45:01.342962107 +0000 UTC m=+0.122794633 container attach 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:01 np0005590528 podman[79756]: 2026-01-21 13:45:01.343920901 +0000 UTC m=+0.123753417 container died 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 08:45:01 np0005590528 systemd[1]: var-lib-containers-storage-overlay-abe9148d836ab8cd088f8cf00abee498eeb71d36d46fba320aa6bf7a9aea6d90-merged.mount: Deactivated successfully.
Jan 21 08:45:01 np0005590528 podman[79756]: 2026-01-21 13:45:01.383903953 +0000 UTC m=+0.163736459 container remove 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:45:01 np0005590528 systemd[1]: libpod-conmon-40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23.scope: Deactivated successfully.
Jan 21 08:45:01 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:01 np0005590528 python3[79800]: ansible-ansible.legacy.async_status Invoked with jid=j808190305125.79262 mode=status _async_dir=/root/.ansible_async
Jan 21 08:45:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:01 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:01 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 21 08:45:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 08:45:01 np0005590528 ceph-mon[75031]: Deploying daemon crash.compute-0 on compute-0
Jan 21 08:45:01 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:01 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:01 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:01 np0005590528 python3[79899]: ansible-ansible.legacy.async_status Invoked with jid=j808190305125.79262 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 08:45:01 np0005590528 systemd[1]: Starting Ceph crash.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:45:02 np0005590528 podman[79988]: 2026-01-21 13:45:02.195588952 +0000 UTC m=+0.048049287 container create 52571d403aeaf640f6890095a6ccf83602c1d467929b5f9e357a86778560fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 08:45:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d180385bcc2cdca3f04429398a0e604c6f7b1a5bb279ebb760cea64fd8c2cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d180385bcc2cdca3f04429398a0e604c6f7b1a5bb279ebb760cea64fd8c2cf/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d180385bcc2cdca3f04429398a0e604c6f7b1a5bb279ebb760cea64fd8c2cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d180385bcc2cdca3f04429398a0e604c6f7b1a5bb279ebb760cea64fd8c2cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:02 np0005590528 podman[79988]: 2026-01-21 13:45:02.178526178 +0000 UTC m=+0.030986553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:02 np0005590528 podman[79988]: 2026-01-21 13:45:02.287429313 +0000 UTC m=+0.139889658 container init 52571d403aeaf640f6890095a6ccf83602c1d467929b5f9e357a86778560fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:02 np0005590528 podman[79988]: 2026-01-21 13:45:02.295640781 +0000 UTC m=+0.148101116 container start 52571d403aeaf640f6890095a6ccf83602c1d467929b5f9e357a86778560fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 08:45:02 np0005590528 bash[79988]: 52571d403aeaf640f6890095a6ccf83602c1d467929b5f9e357a86778560fdbc
Jan 21 08:45:02 np0005590528 systemd[1]: Started Ceph crash.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 21 08:45:02 np0005590528 python3[80026]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:02 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev 85295e5b-4a25-407c-8383-f553a6b980c4 (Updating crash deployment (+1 -> 1))
Jan 21 08:45:02 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 85295e5b-4a25-407c-8383-f553a6b980c4 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:02 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev fa78ad26-c6ea-4526-bd19-a2f94f2692d1 (Updating mgr deployment (+1 -> 2))
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr services"} : dispatch
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:02 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.dxoawe on compute-0
Jan 21 08:45:02 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.dxoawe on compute-0
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.444+0000 7f2f44a0b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.444+0000 7f2f44a0b640 -1 AuthRegistry(0x7f2f40052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.445+0000 7f2f44a0b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.445+0000 7f2f44a0b640 -1 AuthRegistry(0x7f2f44a09fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.446+0000 7f2f3e575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.446+0000 7f2f44a0b640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 21 08:45:02 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 21 08:45:02 np0005590528 python3[80136]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:02 np0005590528 podman[80162]: 2026-01-21 13:45:02.871772276 +0000 UTC m=+0.038307997 container create 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:02 np0005590528 systemd[1]: Started libpod-conmon-3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52.scope.
Jan 21 08:45:02 np0005590528 podman[80176]: 2026-01-21 13:45:02.90842578 +0000 UTC m=+0.034737717 container create 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:45:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:02 np0005590528 systemd[1]: Started libpod-conmon-345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a.scope.
Jan 21 08:45:02 np0005590528 podman[80162]: 2026-01-21 13:45:02.85447777 +0000 UTC m=+0.021013511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:02 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:02 np0005590528 podman[80162]: 2026-01-21 13:45:02.95885699 +0000 UTC m=+0.125392731 container init 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:45:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931664503be5c68e5a065f5cf8d770aabbe13eb8634bcb721045cb550ec5f96c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931664503be5c68e5a065f5cf8d770aabbe13eb8634bcb721045cb550ec5f96c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931664503be5c68e5a065f5cf8d770aabbe13eb8634bcb721045cb550ec5f96c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:02 np0005590528 podman[80162]: 2026-01-21 13:45:02.970614558 +0000 UTC m=+0.137150299 container start 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 08:45:02 np0005590528 podman[80162]: 2026-01-21 13:45:02.974899349 +0000 UTC m=+0.141435070 container attach 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:02 np0005590528 affectionate_turing[80191]: 167 167
Jan 21 08:45:02 np0005590528 systemd[1]: libpod-3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52.scope: Deactivated successfully.
Jan 21 08:45:02 np0005590528 podman[80176]: 2026-01-21 13:45:02.982239784 +0000 UTC m=+0.108551741 container init 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 08:45:02 np0005590528 podman[80162]: 2026-01-21 13:45:02.983185278 +0000 UTC m=+0.149721029 container died 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:02 np0005590528 podman[80176]: 2026-01-21 13:45:02.987913885 +0000 UTC m=+0.114225822 container start 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:02 np0005590528 podman[80176]: 2026-01-21 13:45:02.894257087 +0000 UTC m=+0.020569054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:02 np0005590528 podman[80176]: 2026-01-21 13:45:02.995123638 +0000 UTC m=+0.121435575 container attach 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:03 np0005590528 systemd[1]: var-lib-containers-storage-overlay-f9e0fced8f281c2a1d9ed10b1b999592f4951067d92328eaf2282cdb43f7be97-merged.mount: Deactivated successfully.
Jan 21 08:45:03 np0005590528 podman[80162]: 2026-01-21 13:45:03.024923724 +0000 UTC m=+0.191459445 container remove 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:45:03 np0005590528 systemd[1]: libpod-conmon-3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52.scope: Deactivated successfully.
Jan 21 08:45:03 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:03 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:03 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 21 08:45:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 08:45:03 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 08:45:03 np0005590528 gallant_khorana[80197]: 
Jan 21 08:45:03 np0005590528 gallant_khorana[80197]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 08:45:03 np0005590528 podman[80176]: 2026-01-21 13:45:03.430021788 +0000 UTC m=+0.556333725 container died 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:03 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:03 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:03 np0005590528 systemd[1]: libpod-345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a.scope: Deactivated successfully.
Jan 21 08:45:03 np0005590528 systemd[1]: Starting Ceph mgr.compute-0.dxoawe for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:45:03 np0005590528 systemd[1]: var-lib-containers-storage-overlay-931664503be5c68e5a065f5cf8d770aabbe13eb8634bcb721045cb550ec5f96c-merged.mount: Deactivated successfully.
Jan 21 08:45:03 np0005590528 podman[80176]: 2026-01-21 13:45:03.681961905 +0000 UTC m=+0.808273842 container remove 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:45:03 np0005590528 systemd[1]: libpod-conmon-345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a.scope: Deactivated successfully.
Jan 21 08:45:03 np0005590528 podman[80376]: 2026-01-21 13:45:03.933971623 +0000 UTC m=+0.049290374 container create 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 08:45:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861/merged/var/lib/ceph/mgr/ceph-compute-0.dxoawe supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:04 np0005590528 podman[80376]: 2026-01-21 13:45:04.001104932 +0000 UTC m=+0.116423723 container init 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:45:04 np0005590528 podman[80376]: 2026-01-21 13:45:03.911891808 +0000 UTC m=+0.027210589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:04 np0005590528 podman[80376]: 2026-01-21 13:45:04.006296336 +0000 UTC m=+0.121615087 container start 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:04 np0005590528 bash[80376]: 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c
Jan 21 08:45:04 np0005590528 systemd[1]: Started Ceph mgr.compute-0.dxoawe for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:45:04 np0005590528 ceph-mgr[80421]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:45:04 np0005590528 ceph-mgr[80421]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 21 08:45:04 np0005590528 ceph-mgr[80421]: pidfile_write: ignore empty --pid-file
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:04 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'alerts'
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:04 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev fa78ad26-c6ea-4526-bd19-a2f94f2692d1 (Updating mgr deployment (+1 -> 2))
Jan 21 08:45:04 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event fa78ad26-c6ea-4526-bd19-a2f94f2692d1 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:04 np0005590528 python3[80419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:04 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'balancer'
Jan 21 08:45:04 np0005590528 podman[80457]: 2026-01-21 13:45:04.198119955 +0000 UTC m=+0.052615572 container create 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 21 08:45:04 np0005590528 systemd[1]: Started libpod-conmon-532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e.scope.
Jan 21 08:45:04 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57b30e252a416daa513c9d73a71b29b1878f4b8b46e7f0db6d624232aa8eda/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57b30e252a416daa513c9d73a71b29b1878f4b8b46e7f0db6d624232aa8eda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57b30e252a416daa513c9d73a71b29b1878f4b8b46e7f0db6d624232aa8eda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:04 np0005590528 podman[80457]: 2026-01-21 13:45:04.176842601 +0000 UTC m=+0.031338268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:04 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'cephadm'
Jan 21 08:45:04 np0005590528 podman[80457]: 2026-01-21 13:45:04.293589268 +0000 UTC m=+0.148084915 container init 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 08:45:04 np0005590528 podman[80457]: 2026-01-21 13:45:04.299375301 +0000 UTC m=+0.153870928 container start 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 08:45:04 np0005590528 podman[80457]: 2026-01-21 13:45:04.303135965 +0000 UTC m=+0.157631592 container attach 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: Deploying daemon mgr.compute-0.dxoawe on compute-0
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 21 08:45:04 np0005590528 podman[80603]: 2026-01-21 13:45:04.74831689 +0000 UTC m=+0.054665911 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 08:45:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3509207885' entity='client.admin' 
Jan 21 08:45:04 np0005590528 systemd[1]: libpod-532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e.scope: Deactivated successfully.
Jan 21 08:45:04 np0005590528 podman[80457]: 2026-01-21 13:45:04.77414729 +0000 UTC m=+0.628642937 container died 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 21 08:45:04 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0a57b30e252a416daa513c9d73a71b29b1878f4b8b46e7f0db6d624232aa8eda-merged.mount: Deactivated successfully.
Jan 21 08:45:04 np0005590528 podman[80457]: 2026-01-21 13:45:04.82389705 +0000 UTC m=+0.678392697 container remove 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:04 np0005590528 systemd[1]: libpod-conmon-532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e.scope: Deactivated successfully.
Jan 21 08:45:04 np0005590528 podman[80603]: 2026-01-21 13:45:04.849860691 +0000 UTC m=+0.156209692 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:04 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:05 np0005590528 ansible-async_wrapper.py[79322]: Done in kid B.
Jan 21 08:45:05 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'crash'
Jan 21 08:45:05 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'dashboard'
Jan 21 08:45:05 np0005590528 python3[80720]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:05 np0005590528 podman[80749]: 2026-01-21 13:45:05.223701919 +0000 UTC m=+0.046448425 container create ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:05 np0005590528 systemd[1]: Started libpod-conmon-ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db.scope.
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:05 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d32c4e786e205b83806d875665df0700f69dd1cd9cbd16bfd9279406bc9c96f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d32c4e786e205b83806d875665df0700f69dd1cd9cbd16bfd9279406bc9c96f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d32c4e786e205b83806d875665df0700f69dd1cd9cbd16bfd9279406bc9c96f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:05 np0005590528 podman[80749]: 2026-01-21 13:45:05.205410817 +0000 UTC m=+0.028157343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:05 np0005590528 podman[80749]: 2026-01-21 13:45:05.31203083 +0000 UTC m=+0.134777386 container init ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 08:45:05 np0005590528 podman[80749]: 2026-01-21 13:45:05.31969874 +0000 UTC m=+0.142445246 container start ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:05 np0005590528 podman[80749]: 2026-01-21 13:45:05.323596105 +0000 UTC m=+0.146342661 container attach ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/3509207885' entity='client.admin' 
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/259544767' entity='client.admin' 
Jan 21 08:45:05 np0005590528 podman[80892]: 2026-01-21 13:45:05.715140836 +0000 UTC m=+0.039246592 container create 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 08:45:05 np0005590528 podman[80749]: 2026-01-21 13:45:05.730984811 +0000 UTC m=+0.553731317 container died ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 08:45:05 np0005590528 systemd[1]: libpod-ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db.scope: Deactivated successfully.
Jan 21 08:45:05 np0005590528 systemd[1]: Started libpod-conmon-5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c.scope.
Jan 21 08:45:05 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3d32c4e786e205b83806d875665df0700f69dd1cd9cbd16bfd9279406bc9c96f-merged.mount: Deactivated successfully.
Jan 21 08:45:05 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:05 np0005590528 podman[80749]: 2026-01-21 13:45:05.768971324 +0000 UTC m=+0.591717830 container remove ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 08:45:05 np0005590528 systemd[1]: libpod-conmon-ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db.scope: Deactivated successfully.
Jan 21 08:45:05 np0005590528 podman[80892]: 2026-01-21 13:45:05.783918958 +0000 UTC m=+0.108024764 container init 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 08:45:05 np0005590528 podman[80892]: 2026-01-21 13:45:05.790201118 +0000 UTC m=+0.114306874 container start 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:05 np0005590528 podman[80892]: 2026-01-21 13:45:05.794460809 +0000 UTC m=+0.118566615 container attach 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 08:45:05 np0005590528 crazy_khorana[80917]: 167 167
Jan 21 08:45:05 np0005590528 podman[80892]: 2026-01-21 13:45:05.79600515 +0000 UTC m=+0.120110946 container died 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:05 np0005590528 systemd[1]: libpod-5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c.scope: Deactivated successfully.
Jan 21 08:45:05 np0005590528 podman[80892]: 2026-01-21 13:45:05.700449166 +0000 UTC m=+0.024554942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:05 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1b97f792012f04bae68a1b795cc58d4761c58e4af5bd1db096f149f44ee69589-merged.mount: Deactivated successfully.
Jan 21 08:45:05 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'devicehealth'
Jan 21 08:45:05 np0005590528 podman[80892]: 2026-01-21 13:45:05.84152116 +0000 UTC m=+0.165626926 container remove 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:05 np0005590528 systemd[1]: libpod-conmon-5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c.scope: Deactivated successfully.
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:05 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.tnwklj (unknown last config time)...
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.tnwklj (unknown last config time)...
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.tnwklj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.tnwklj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr services"} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.tnwklj on compute-0
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.tnwklj on compute-0
Jan 21 08:45:05 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 2 completed events
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 08:45:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:06 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe[80415]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 08:45:06 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe[80415]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 08:45:06 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe[80415]:  from numpy import show_config as show_numpy_config
Jan 21 08:45:06 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'influx'
Jan 21 08:45:06 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'insights'
Jan 21 08:45:06 np0005590528 python3[80997]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:06 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'iostat'
Jan 21 08:45:06 np0005590528 podman[81016]: 2026-01-21 13:45:06.252374197 +0000 UTC m=+0.066067914 container create ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 08:45:06 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'k8sevents'
Jan 21 08:45:06 np0005590528 systemd[1]: Started libpod-conmon-ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6.scope.
Jan 21 08:45:06 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:06 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5794805674bce61625bca5b2334d3a197188839c30b8b4b9b72922e850dd3c5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:06 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5794805674bce61625bca5b2334d3a197188839c30b8b4b9b72922e850dd3c5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:06 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5794805674bce61625bca5b2334d3a197188839c30b8b4b9b72922e850dd3c5b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:06 np0005590528 podman[81016]: 2026-01-21 13:45:06.230869089 +0000 UTC m=+0.044562826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:06 np0005590528 podman[81016]: 2026-01-21 13:45:06.330058505 +0000 UTC m=+0.143752242 container init ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:06 np0005590528 podman[81016]: 2026-01-21 13:45:06.337697745 +0000 UTC m=+0.151391482 container start ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:06 np0005590528 podman[81016]: 2026-01-21 13:45:06.353591772 +0000 UTC m=+0.167285509 container attach ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 08:45:06 np0005590528 podman[81049]: 2026-01-21 13:45:06.393032585 +0000 UTC m=+0.039093650 container create 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 08:45:06 np0005590528 systemd[1]: Started libpod-conmon-7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01.scope.
Jan 21 08:45:06 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:06 np0005590528 podman[81049]: 2026-01-21 13:45:06.376745033 +0000 UTC m=+0.022806128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:06 np0005590528 podman[81049]: 2026-01-21 13:45:06.478918402 +0000 UTC m=+0.124979497 container init 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 08:45:06 np0005590528 podman[81049]: 2026-01-21 13:45:06.484008104 +0000 UTC m=+0.130069169 container start 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 21 08:45:06 np0005590528 podman[81049]: 2026-01-21 13:45:06.487344032 +0000 UTC m=+0.133405127 container attach 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:06 np0005590528 frosty_hawking[81065]: 167 167
Jan 21 08:45:06 np0005590528 systemd[1]: libpod-7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01.scope: Deactivated successfully.
Jan 21 08:45:06 np0005590528 podman[81089]: 2026-01-21 13:45:06.536863529 +0000 UTC m=+0.030871972 container died 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:06 np0005590528 systemd[1]: var-lib-containers-storage-overlay-930f334f21f91547f8d4d1b218dd88a9ad229b1756f79b08ea7f510442fff92e-merged.mount: Deactivated successfully.
Jan 21 08:45:06 np0005590528 podman[81089]: 2026-01-21 13:45:06.578744747 +0000 UTC m=+0.072753110 container remove 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:06 np0005590528 systemd[1]: libpod-conmon-7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01.scope: Deactivated successfully.
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:06 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'localpool'
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/259544767' entity='client.admin' 
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: Reconfiguring mgr.compute-0.tnwklj (unknown last config time)...
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.tnwklj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: Reconfiguring daemon mgr.compute-0.tnwklj on compute-0
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 21 08:45:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1919572567' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 21 08:45:06 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 08:45:06 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:06 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'mirroring'
Jan 21 08:45:07 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'nfs'
Jan 21 08:45:07 np0005590528 podman[81200]: 2026-01-21 13:45:07.19250701 +0000 UTC m=+0.085758365 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 08:45:07 np0005590528 podman[81200]: 2026-01-21 13:45:07.313735481 +0000 UTC m=+0.206986806 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:07 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'orchestrator'
Jan 21 08:45:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:07 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 08:45:07 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'osd_support'
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1919572567' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1919572567' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 21 08:45:07 np0005590528 relaxed_edison[81043]: set require_min_compat_client to mimic
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:45:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:07 np0005590528 systemd[1]: libpod-ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6.scope: Deactivated successfully.
Jan 21 08:45:07 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 08:45:07 np0005590528 podman[81016]: 2026-01-21 13:45:07.749839768 +0000 UTC m=+1.563533515 container died ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 08:45:07 np0005590528 systemd[1]: var-lib-containers-storage-overlay-5794805674bce61625bca5b2334d3a197188839c30b8b4b9b72922e850dd3c5b-merged.mount: Deactivated successfully.
Jan 21 08:45:07 np0005590528 podman[81016]: 2026-01-21 13:45:07.80320972 +0000 UTC m=+1.616903437 container remove ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 08:45:07 np0005590528 systemd[1]: libpod-conmon-ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6.scope: Deactivated successfully.
Jan 21 08:45:07 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'progress'
Jan 21 08:45:07 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'prometheus'
Jan 21 08:45:08 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'rbd_support'
Jan 21 08:45:08 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'rgw'
Jan 21 08:45:08 np0005590528 python3[81375]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:08 np0005590528 podman[81376]: 2026-01-21 13:45:08.490513223 +0000 UTC m=+0.042587448 container create 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 21 08:45:08 np0005590528 systemd[1]: Started libpod-conmon-81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e.scope.
Jan 21 08:45:08 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6a5f119a51460542715af86b25ed412f143e272f771fc237316efe4490110f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6a5f119a51460542715af86b25ed412f143e272f771fc237316efe4490110f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6a5f119a51460542715af86b25ed412f143e272f771fc237316efe4490110f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:08 np0005590528 podman[81376]: 2026-01-21 13:45:08.468271236 +0000 UTC m=+0.020345461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:08 np0005590528 podman[81376]: 2026-01-21 13:45:08.56727286 +0000 UTC m=+0.119347105 container init 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:45:08 np0005590528 podman[81376]: 2026-01-21 13:45:08.574260229 +0000 UTC m=+0.126334494 container start 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 08:45:08 np0005590528 podman[81376]: 2026-01-21 13:45:08.578811224 +0000 UTC m=+0.130885489 container attach 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 08:45:08 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'rook'
Jan 21 08:45:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:08 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1919572567' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 21 08:45:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:08 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:45:09 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'selftest'
Jan 21 08:45:09 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'smb'
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Added host compute-0
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev b4f89928-3bed-4eb1-adff-3f77fb354c0b (Updating mgr deployment (-1 -> 1))
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.dxoawe from compute-0 -- ports [8765]
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.dxoawe from compute-0 -- ports [8765]
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 keen_shannon[81391]: Added host 'compute-0' with addr '192.168.122.100'
Jan 21 08:45:09 np0005590528 keen_shannon[81391]: Scheduled mon update...
Jan 21 08:45:09 np0005590528 keen_shannon[81391]: Scheduled mgr update...
Jan 21 08:45:09 np0005590528 keen_shannon[81391]: Scheduled osd.default_drive_group update...
Jan 21 08:45:09 np0005590528 systemd[1]: libpod-81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e.scope: Deactivated successfully.
Jan 21 08:45:09 np0005590528 podman[81376]: 2026-01-21 13:45:09.485155706 +0000 UTC m=+1.037229971 container died 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:45:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:09 np0005590528 systemd[1]: var-lib-containers-storage-overlay-fb6a5f119a51460542715af86b25ed412f143e272f771fc237316efe4490110f-merged.mount: Deactivated successfully.
Jan 21 08:45:09 np0005590528 podman[81376]: 2026-01-21 13:45:09.538276875 +0000 UTC m=+1.090351140 container remove 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 08:45:09 np0005590528 systemd[1]: libpod-conmon-81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e.scope: Deactivated successfully.
Jan 21 08:45:09 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'snap_schedule'
Jan 21 08:45:09 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'stats'
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:09 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'status'
Jan 21 08:45:09 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'telegraf'
Jan 21 08:45:09 np0005590528 systemd[1]: Stopping Ceph mgr.compute-0.dxoawe for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:45:09 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'telemetry'
Jan 21 08:45:09 np0005590528 python3[81584]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:10 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 08:45:10 np0005590528 podman[81610]: 2026-01-21 13:45:10.021969041 +0000 UTC m=+0.025180851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:10 np0005590528 ceph-mgr[80421]: mgr[py] Loading python module 'volumes'
Jan 21 08:45:10 np0005590528 podman[81610]: 2026-01-21 13:45:10.459084092 +0000 UTC m=+0.462295892 container create 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:10 np0005590528 systemd[1]: Started libpod-conmon-7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882.scope.
Jan 21 08:45:10 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:10 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : Standby manager daemon compute-0.dxoawe started
Jan 21 08:45:10 np0005590528 ceph-mgr[80421]: ms_deliver_dispatch: unhandled message 0x55e988a70000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 08:45:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed217ad443830204a63fad09581f268b53ec2029a2dea438568c478b5437de63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:10 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from mgr.compute-0.dxoawe 192.168.122.100:0/2133553605; not ready for session (expect reconnect)
Jan 21 08:45:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed217ad443830204a63fad09581f268b53ec2029a2dea438568c478b5437de63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed217ad443830204a63fad09581f268b53ec2029a2dea438568c478b5437de63/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:10 np0005590528 podman[81610]: 2026-01-21 13:45:10.651204195 +0000 UTC m=+0.654416035 container init 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 08:45:10 np0005590528 podman[81610]: 2026-01-21 13:45:10.663549001 +0000 UTC m=+0.666760771 container start 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 08:45:10 np0005590528 podman[81610]: 2026-01-21 13:45:10.679776623 +0000 UTC m=+0.682988423 container attach 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 08:45:10 np0005590528 ceph-mon[75031]: Added host compute-0
Jan 21 08:45:10 np0005590528 ceph-mon[75031]: Saving service mon spec with placement compute-0
Jan 21 08:45:10 np0005590528 ceph-mon[75031]: Saving service mgr spec with placement compute-0
Jan 21 08:45:10 np0005590528 ceph-mon[75031]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 08:45:10 np0005590528 ceph-mon[75031]: Saving service osd.default_drive_group spec with placement compute-0
Jan 21 08:45:10 np0005590528 ceph-mon[75031]: Removing daemon mgr.compute-0.dxoawe from compute-0 -- ports [8765]
Jan 21 08:45:10 np0005590528 podman[81621]: 2026-01-21 13:45:10.804390343 +0000 UTC m=+0.772540712 container stop 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:10 np0005590528 podman[81621]: 2026-01-21 13:45:10.836810015 +0000 UTC m=+0.804960384 container died 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:45:10 np0005590528 systemd[1]: var-lib-containers-storage-overlay-eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861-merged.mount: Deactivated successfully.
Jan 21 08:45:10 np0005590528 podman[81621]: 2026-01-21 13:45:10.887488319 +0000 UTC m=+0.855638658 container remove 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:10 np0005590528 bash[81621]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe
Jan 21 08:45:10 np0005590528 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mgr.compute-0.dxoawe.service: Main process exited, code=exited, status=143/n/a
Jan 21 08:45:10 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:45:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:45:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:45:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:45:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:45:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:45:11 np0005590528 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mgr.compute-0.dxoawe.service: Failed with result 'exit-code'.
Jan 21 08:45:11 np0005590528 systemd[1]: Stopped Ceph mgr.compute-0.dxoawe for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:45:11 np0005590528 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mgr.compute-0.dxoawe.service: Consumed 7.548s CPU time, 415.3M memory peak, read 0B from disk, written 155.5K to disk.
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:11 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:11 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:11 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262478170' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 08:45:11 np0005590528 lucid_heyrovsky[81638]: 
Jan 21 08:45:11 np0005590528 lucid_heyrovsky[81638]: {"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":50,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-21T13:44:18:859596+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-21T13:44:18.861719+0000","services":{}},"progress_events":{"b4f89928-3bed-4eb1-adff-3f77fb354c0b":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 21 08:45:11 np0005590528 podman[81610]: 2026-01-21 13:45:11.306242282 +0000 UTC m=+1.309454072 container died 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 08:45:11 np0005590528 systemd[1]: libpod-7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882.scope: Deactivated successfully.
Jan 21 08:45:11 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ed217ad443830204a63fad09581f268b53ec2029a2dea438568c478b5437de63-merged.mount: Deactivated successfully.
Jan 21 08:45:11 np0005590528 podman[81610]: 2026-01-21 13:45:11.397135004 +0000 UTC m=+1.400346774 container remove 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 08:45:11 np0005590528 systemd[1]: libpod-conmon-7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882.scope: Deactivated successfully.
Jan 21 08:45:11 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.dxoawe
Jan 21 08:45:11 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.dxoawe
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"} v 0)
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"} : dispatch
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"}]': finished
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:11 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev b4f89928-3bed-4eb1-adff-3f77fb354c0b (Updating mgr deployment (-1 -> 1))
Jan 21 08:45:11 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event b4f89928-3bed-4eb1-adff-3f77fb354c0b (Updating mgr deployment (-1 -> 1)) in 2 seconds
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.tnwklj(active, since 32s), standbys: compute-0.dxoawe
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dxoawe", "id": "compute-0.dxoawe"} v 0)
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.dxoawe", "id": "compute-0.dxoawe"} : dispatch
Jan 21 08:45:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"} : dispatch
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"}]': finished
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 podman[81889]: 2026-01-21 13:45:12.088054898 +0000 UTC m=+0.058343356 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 08:45:12 np0005590528 podman[81889]: 2026-01-21 13:45:12.221898027 +0000 UTC m=+0.192186445 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: Removing key for mgr.compute-0.dxoawe
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:45:12 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:13 np0005590528 podman[82047]: 2026-01-21 13:45:13.056872134 +0000 UTC m=+0.052677651 container create f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:13 np0005590528 systemd[1]: Started libpod-conmon-f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152.scope.
Jan 21 08:45:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:13 np0005590528 podman[82047]: 2026-01-21 13:45:13.127526972 +0000 UTC m=+0.123332469 container init f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:13 np0005590528 podman[82047]: 2026-01-21 13:45:13.136099927 +0000 UTC m=+0.131905424 container start f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:13 np0005590528 podman[82047]: 2026-01-21 13:45:13.040706287 +0000 UTC m=+0.036511784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:13 np0005590528 elastic_rubin[82063]: 167 167
Jan 21 08:45:13 np0005590528 podman[82047]: 2026-01-21 13:45:13.140179354 +0000 UTC m=+0.135984861 container attach f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 08:45:13 np0005590528 systemd[1]: libpod-f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152.scope: Deactivated successfully.
Jan 21 08:45:13 np0005590528 podman[82047]: 2026-01-21 13:45:13.140756589 +0000 UTC m=+0.136562076 container died f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 08:45:13 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6e37374a62fb66e29dc30746a935c02ca375abe1e582da0ea051cd16bf2a89bb-merged.mount: Deactivated successfully.
Jan 21 08:45:13 np0005590528 podman[82047]: 2026-01-21 13:45:13.184535515 +0000 UTC m=+0.180340992 container remove f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:13 np0005590528 systemd[1]: libpod-conmon-f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152.scope: Deactivated successfully.
Jan 21 08:45:13 np0005590528 podman[82086]: 2026-01-21 13:45:13.321099278 +0000 UTC m=+0.036135704 container create a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:13 np0005590528 systemd[1]: Started libpod-conmon-a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879.scope.
Jan 21 08:45:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:13 np0005590528 podman[82086]: 2026-01-21 13:45:13.306141541 +0000 UTC m=+0.021177987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:13 np0005590528 podman[82086]: 2026-01-21 13:45:13.432348698 +0000 UTC m=+0.147385154 container init a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:13 np0005590528 podman[82086]: 2026-01-21 13:45:13.440758079 +0000 UTC m=+0.155794545 container start a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:45:13 np0005590528 podman[82086]: 2026-01-21 13:45:13.444505368 +0000 UTC m=+0.159541904 container attach a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 08:45:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:14 np0005590528 epic_jones[82102]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:45:14 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:14 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:14 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bb69e93d-312d-404f-89ad-65c71069da0f
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"} v 0)
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/202145632' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"} : dispatch
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/202145632' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"}]': finished
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:14 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/202145632' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"} : dispatch
Jan 21 08:45:14 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/202145632' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"}]': finished
Jan 21 08:45:14 np0005590528 epic_jones[82102]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 21 08:45:14 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 21 08:45:14 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 08:45:14 np0005590528 epic_jones[82102]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:14 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 21 08:45:14 np0005590528 lvm[82194]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:45:14 np0005590528 lvm[82194]: VG ceph_vg0 finished
Jan 21 08:45:14 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 21 08:45:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887925425' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 08:45:15 np0005590528 epic_jones[82102]: stderr: got monmap epoch 1
Jan 21 08:45:15 np0005590528 epic_jones[82102]: --> Creating keyring file for osd.0
Jan 21 08:45:15 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 21 08:45:15 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 21 08:45:15 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid bb69e93d-312d-404f-89ad-65c71069da0f --setuser ceph --setgroup ceph
Jan 21 08:45:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:15 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 21 08:45:15 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 08:45:15 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 3 completed events
Jan 21 08:45:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 08:45:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:16 np0005590528 epic_jones[82102]: stderr: 2026-01-21T13:45:15.527+0000 7fb2261bf8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 21 08:45:16 np0005590528 epic_jones[82102]: stderr: 2026-01-21T13:45:15.550+0000 7fb2261bf8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 21 08:45:16 np0005590528 epic_jones[82102]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 08:45:16 np0005590528 epic_jones[82102]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 08:45:16 np0005590528 epic_jones[82102]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:16 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e72716bc-fd8c-40ef-ada4-83584d595d05
Jan 21 08:45:16 np0005590528 ceph-mon[75031]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 21 08:45:16 np0005590528 ceph-mon[75031]: Cluster is now healthy
Jan 21 08:45:16 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:16 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"} v 0)
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/997373637' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"} : dispatch
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/997373637' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"}]': finished
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:17 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:17 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:17 np0005590528 lvm[83141]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:45:17 np0005590528 lvm[83141]: VG ceph_vg1 finished
Jan 21 08:45:17 np0005590528 epic_jones[82102]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 21 08:45:17 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 21 08:45:17 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 21 08:45:17 np0005590528 epic_jones[82102]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:17 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 21 08:45:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/997373637' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"} : dispatch
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/997373637' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"}]': finished
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 21 08:45:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830211096' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 08:45:17 np0005590528 epic_jones[82102]: stderr: got monmap epoch 1
Jan 21 08:45:17 np0005590528 epic_jones[82102]: --> Creating keyring file for osd.1
Jan 21 08:45:17 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 21 08:45:17 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 21 08:45:17 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid e72716bc-fd8c-40ef-ada4-83584d595d05 --setuser ceph --setgroup ceph
Jan 21 08:45:18 np0005590528 epic_jones[82102]: stderr: 2026-01-21T13:45:17.931+0000 7f321848f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 21 08:45:18 np0005590528 epic_jones[82102]: stderr: 2026-01-21T13:45:17.952+0000 7f321848f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 21 08:45:18 np0005590528 epic_jones[82102]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 21 08:45:18 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:18 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 08:45:18 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 08:45:19 np0005590528 epic_jones[82102]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 21 08:45:19 np0005590528 epic_jones[82102]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8d905f10-e78d-4894-96b3-7b33a725e1b7
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"} v 0)
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/259288800' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"} : dispatch
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/259288800' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"}]': finished
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:19 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:19 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:19 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:19 np0005590528 lvm[84088]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:45:19 np0005590528 lvm[84088]: VG ceph_vg2 finished
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:19 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/259288800' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"} : dispatch
Jan 21 08:45:19 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/259288800' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"}]': finished
Jan 21 08:45:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 21 08:45:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103963103' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 08:45:20 np0005590528 epic_jones[82102]: stderr: got monmap epoch 1
Jan 21 08:45:20 np0005590528 epic_jones[82102]: --> Creating keyring file for osd.2
Jan 21 08:45:20 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 21 08:45:20 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 21 08:45:20 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 8d905f10-e78d-4894-96b3-7b33a725e1b7 --setuser ceph --setgroup ceph
Jan 21 08:45:20 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:21 np0005590528 epic_jones[82102]: stderr: 2026-01-21T13:45:20.357+0000 7f5fd2dcc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 21 08:45:21 np0005590528 epic_jones[82102]: stderr: 2026-01-21T13:45:20.380+0000 7f5fd2dcc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 21 08:45:21 np0005590528 epic_jones[82102]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 21 08:45:21 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 08:45:21 np0005590528 epic_jones[82102]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 21 08:45:21 np0005590528 epic_jones[82102]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:21 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:21 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 21 08:45:21 np0005590528 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 08:45:21 np0005590528 epic_jones[82102]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 21 08:45:21 np0005590528 epic_jones[82102]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 21 08:45:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:21 np0005590528 systemd[1]: libpod-a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879.scope: Deactivated successfully.
Jan 21 08:45:21 np0005590528 systemd[1]: libpod-a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879.scope: Consumed 6.459s CPU time.
Jan 21 08:45:21 np0005590528 podman[85003]: 2026-01-21 13:45:21.578370874 +0000 UTC m=+0.031879043 container died a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 08:45:21 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac-merged.mount: Deactivated successfully.
Jan 21 08:45:21 np0005590528 podman[85003]: 2026-01-21 13:45:21.629435134 +0000 UTC m=+0.082943223 container remove a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:45:21 np0005590528 systemd[1]: libpod-conmon-a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879.scope: Deactivated successfully.
Jan 21 08:45:22 np0005590528 podman[85081]: 2026-01-21 13:45:22.107710166 +0000 UTC m=+0.054041743 container create 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:22 np0005590528 systemd[1]: Started libpod-conmon-1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12.scope.
Jan 21 08:45:22 np0005590528 podman[85081]: 2026-01-21 13:45:22.081887448 +0000 UTC m=+0.028219125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:22 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:22 np0005590528 podman[85081]: 2026-01-21 13:45:22.218507974 +0000 UTC m=+0.164839601 container init 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:22 np0005590528 podman[85081]: 2026-01-21 13:45:22.231080654 +0000 UTC m=+0.177412251 container start 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 08:45:22 np0005590528 podman[85081]: 2026-01-21 13:45:22.236056254 +0000 UTC m=+0.182387941 container attach 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 21 08:45:22 np0005590528 keen_stonebraker[85098]: 167 167
Jan 21 08:45:22 np0005590528 systemd[1]: libpod-1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12.scope: Deactivated successfully.
Jan 21 08:45:22 np0005590528 podman[85081]: 2026-01-21 13:45:22.238239785 +0000 UTC m=+0.184571412 container died 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 08:45:22 np0005590528 systemd[1]: var-lib-containers-storage-overlay-52e1114c46edaebe990d7c5a712660a9befb4a475dc9e26d83f9cecf20da17e7-merged.mount: Deactivated successfully.
Jan 21 08:45:22 np0005590528 podman[85081]: 2026-01-21 13:45:22.277060393 +0000 UTC m=+0.223391970 container remove 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 08:45:22 np0005590528 systemd[1]: libpod-conmon-1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12.scope: Deactivated successfully.
Jan 21 08:45:22 np0005590528 podman[85122]: 2026-01-21 13:45:22.488351034 +0000 UTC m=+0.052480456 container create c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 08:45:22 np0005590528 systemd[1]: Started libpod-conmon-c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f.scope.
Jan 21 08:45:22 np0005590528 podman[85122]: 2026-01-21 13:45:22.466735056 +0000 UTC m=+0.030864508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:22 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:22 np0005590528 podman[85122]: 2026-01-21 13:45:22.601029876 +0000 UTC m=+0.165159378 container init c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 08:45:22 np0005590528 podman[85122]: 2026-01-21 13:45:22.618334 +0000 UTC m=+0.182463452 container start c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:22 np0005590528 podman[85122]: 2026-01-21 13:45:22.623083924 +0000 UTC m=+0.187213366 container attach c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:22 np0005590528 stoic_jang[85139]: {
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:    "0": [
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:        {
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "devices": [
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "/dev/loop3"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            ],
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_name": "ceph_lv0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_size": "21470642176",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "name": "ceph_lv0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "tags": {
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cluster_name": "ceph",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.crush_device_class": "",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.encrypted": "0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.objectstore": "bluestore",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osd_id": "0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.type": "block",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.vdo": "0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.with_tpm": "0"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            },
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "type": "block",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "vg_name": "ceph_vg0"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:        }
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:    ],
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:    "1": [
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:        {
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "devices": [
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "/dev/loop4"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            ],
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_name": "ceph_lv1",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_size": "21470642176",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "name": "ceph_lv1",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "tags": {
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cluster_name": "ceph",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.crush_device_class": "",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.encrypted": "0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.objectstore": "bluestore",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osd_id": "1",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.type": "block",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.vdo": "0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.with_tpm": "0"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            },
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "type": "block",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "vg_name": "ceph_vg1"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:        }
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:    ],
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:    "2": [
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:        {
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "devices": [
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "/dev/loop5"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            ],
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_name": "ceph_lv2",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_size": "21470642176",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "name": "ceph_lv2",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "tags": {
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.cluster_name": "ceph",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.crush_device_class": "",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.encrypted": "0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.objectstore": "bluestore",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osd_id": "2",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.type": "block",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.vdo": "0",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:                "ceph.with_tpm": "0"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            },
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "type": "block",
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:            "vg_name": "ceph_vg2"
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:        }
Jan 21 08:45:22 np0005590528 stoic_jang[85139]:    ]
Jan 21 08:45:22 np0005590528 stoic_jang[85139]: }
Jan 21 08:45:22 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:22 np0005590528 systemd[1]: libpod-c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f.scope: Deactivated successfully.
Jan 21 08:45:22 np0005590528 podman[85122]: 2026-01-21 13:45:22.972772612 +0000 UTC m=+0.536902064 container died c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:23 np0005590528 systemd[1]: var-lib-containers-storage-overlay-a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc-merged.mount: Deactivated successfully.
Jan 21 08:45:23 np0005590528 podman[85122]: 2026-01-21 13:45:23.033066653 +0000 UTC m=+0.597196065 container remove c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:23 np0005590528 systemd[1]: libpod-conmon-c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f.scope: Deactivated successfully.
Jan 21 08:45:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 21 08:45:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 21 08:45:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:23 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 21 08:45:23 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 21 08:45:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:23 np0005590528 podman[85248]: 2026-01-21 13:45:23.694937461 +0000 UTC m=+0.046608704 container create 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:23 np0005590528 systemd[1]: Started libpod-conmon-20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666.scope.
Jan 21 08:45:23 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:23 np0005590528 podman[85248]: 2026-01-21 13:45:23.67435951 +0000 UTC m=+0.026030783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:23 np0005590528 podman[85248]: 2026-01-21 13:45:23.773579592 +0000 UTC m=+0.125250865 container init 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 08:45:23 np0005590528 podman[85248]: 2026-01-21 13:45:23.783818666 +0000 UTC m=+0.135489919 container start 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 08:45:23 np0005590528 zealous_lederberg[85265]: 167 167
Jan 21 08:45:23 np0005590528 systemd[1]: libpod-20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666.scope: Deactivated successfully.
Jan 21 08:45:23 np0005590528 podman[85248]: 2026-01-21 13:45:23.788645301 +0000 UTC m=+0.140316554 container attach 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:23 np0005590528 podman[85248]: 2026-01-21 13:45:23.790312282 +0000 UTC m=+0.141983545 container died 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:23 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8817ace1e34d7e611acac82f335932dd736780e627865feb9faa37b60dde83e5-merged.mount: Deactivated successfully.
Jan 21 08:45:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 21 08:45:23 np0005590528 ceph-mon[75031]: Deploying daemon osd.0 on compute-0
Jan 21 08:45:23 np0005590528 podman[85248]: 2026-01-21 13:45:23.840457639 +0000 UTC m=+0.192128872 container remove 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:23 np0005590528 systemd[1]: libpod-conmon-20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666.scope: Deactivated successfully.
Jan 21 08:45:24 np0005590528 podman[85293]: 2026-01-21 13:45:24.116790945 +0000 UTC m=+0.041702508 container create 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:24 np0005590528 systemd[1]: Started libpod-conmon-08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237.scope.
Jan 21 08:45:24 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:24 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:24 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:24 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:24 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:24 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:24 np0005590528 podman[85293]: 2026-01-21 13:45:24.195960726 +0000 UTC m=+0.120872349 container init 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 08:45:24 np0005590528 podman[85293]: 2026-01-21 13:45:24.101701614 +0000 UTC m=+0.026613177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:24 np0005590528 podman[85293]: 2026-01-21 13:45:24.209518571 +0000 UTC m=+0.134430114 container start 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:24 np0005590528 podman[85293]: 2026-01-21 13:45:24.214049129 +0000 UTC m=+0.138960722 container attach 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:24 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test[85309]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 21 08:45:24 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test[85309]:                            [--no-systemd] [--no-tmpfs]
Jan 21 08:45:24 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test[85309]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 21 08:45:24 np0005590528 systemd[1]: libpod-08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237.scope: Deactivated successfully.
Jan 21 08:45:24 np0005590528 podman[85293]: 2026-01-21 13:45:24.407044821 +0000 UTC m=+0.331956364 container died 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 08:45:24 np0005590528 systemd[1]: var-lib-containers-storage-overlay-cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84-merged.mount: Deactivated successfully.
Jan 21 08:45:24 np0005590528 podman[85293]: 2026-01-21 13:45:24.450232744 +0000 UTC m=+0.375144287 container remove 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:24 np0005590528 systemd[1]: libpod-conmon-08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237.scope: Deactivated successfully.
Jan 21 08:45:24 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:24 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:24 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:24 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:25 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:25 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:25 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:25 np0005590528 systemd[1]: Starting Ceph osd.0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:45:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:25 np0005590528 podman[85468]: 2026-01-21 13:45:25.692628938 +0000 UTC m=+0.060925817 container create 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:25 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:25 np0005590528 podman[85468]: 2026-01-21 13:45:25.666503204 +0000 UTC m=+0.034800163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:25 np0005590528 podman[85468]: 2026-01-21 13:45:25.771547205 +0000 UTC m=+0.139844194 container init 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 08:45:25 np0005590528 podman[85468]: 2026-01-21 13:45:25.788504 +0000 UTC m=+0.156800879 container start 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:25 np0005590528 podman[85468]: 2026-01-21 13:45:25.792837403 +0000 UTC m=+0.161134362 container attach 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:25 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:25 np0005590528 bash[85468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:26 np0005590528 lvm[85570]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:45:26 np0005590528 lvm[85569]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:45:26 np0005590528 lvm[85569]: VG ceph_vg1 finished
Jan 21 08:45:26 np0005590528 lvm[85570]: VG ceph_vg0 finished
Jan 21 08:45:26 np0005590528 lvm[85572]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:45:26 np0005590528 lvm[85572]: VG ceph_vg2 finished
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 08:45:26 np0005590528 bash[85468]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 08:45:26 np0005590528 bash[85468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 08:45:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 08:45:26 np0005590528 bash[85468]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 08:45:26 np0005590528 systemd[1]: libpod-7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd.scope: Deactivated successfully.
Jan 21 08:45:26 np0005590528 podman[85468]: 2026-01-21 13:45:26.944179161 +0000 UTC m=+1.312476050 container died 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:26 np0005590528 systemd[1]: libpod-7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd.scope: Consumed 1.602s CPU time.
Jan 21 08:45:26 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:26 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724-merged.mount: Deactivated successfully.
Jan 21 08:45:26 np0005590528 podman[85468]: 2026-01-21 13:45:26.99013873 +0000 UTC m=+1.358435609 container remove 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 08:45:27 np0005590528 podman[85721]: 2026-01-21 13:45:27.166251639 +0000 UTC m=+0.039420743 container create 534fa4fe41482b2f0b6a4ea9687ef5d59a9f50942c275fca1e5f7b80f4698ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 08:45:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:27 np0005590528 podman[85721]: 2026-01-21 13:45:27.147535291 +0000 UTC m=+0.020704385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:27 np0005590528 podman[85721]: 2026-01-21 13:45:27.248036983 +0000 UTC m=+0.121206067 container init 534fa4fe41482b2f0b6a4ea9687ef5d59a9f50942c275fca1e5f7b80f4698ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:27 np0005590528 podman[85721]: 2026-01-21 13:45:27.258991485 +0000 UTC m=+0.132160559 container start 534fa4fe41482b2f0b6a4ea9687ef5d59a9f50942c275fca1e5f7b80f4698ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:27 np0005590528 bash[85721]: 534fa4fe41482b2f0b6a4ea9687ef5d59a9f50942c275fca1e5f7b80f4698ff5
Jan 21 08:45:27 np0005590528 systemd[1]: Started Ceph osd.0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: pidfile_write: ignore empty --pid-file
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:27 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 21 08:45:27 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 21 08:45:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: load: jerasure load: lrc 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount shared_bdev_used = 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: RocksDB version: 7.9.2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Git sha 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: DB SUMMARY
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: DB Session ID:  TDEQH9BGDEPQOYZNFORS
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: CURRENT file:  CURRENT
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                         Options.error_if_exists: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.create_if_missing: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                                     Options.env: 0x557eecd31ea0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                                Options.info_log: 0x557eedd8c8a0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                              Options.statistics: (nil)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.use_fsync: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                              Options.db_log_dir: 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.write_buffer_manager: 0x557eedc32b40
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.unordered_write: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.row_cache: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                              Options.wal_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.two_write_queues: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.wal_compression: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.atomic_flush: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.max_background_jobs: 4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.max_background_compactions: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.max_subcompactions: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.max_open_files: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Compression algorithms supported:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kZSTD supported: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kXpressCompression supported: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kBZip2Compression supported: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kLZ4Compression supported: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kZlibCompression supported: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kSnappyCompression supported: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557eecd358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557eecd358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557eecd358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd35a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd35a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd35a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 913d458a-c02d-4bc9-b6ba-f790bdbfb0ef
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127690333, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127692308, "job": 1, "event": "recovery_finished"}
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: freelist init
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: freelist _read_cfg
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs umount
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluefs mount shared_bdev_used = 27262976
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: RocksDB version: 7.9.2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Git sha 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: DB SUMMARY
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: DB Session ID:  TDEQH9BGDEPQOYZNFORT
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: CURRENT file:  CURRENT
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                         Options.error_if_exists: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.create_if_missing: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                                     Options.env: 0x557eecd31ce0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                                Options.info_log: 0x557eedddd760
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                              Options.statistics: (nil)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.use_fsync: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                              Options.db_log_dir: 
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.write_buffer_manager: 0x557eedc32b40
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.unordered_write: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.row_cache: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                              Options.wal_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.two_write_queues: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.wal_compression: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.atomic_flush: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.max_background_jobs: 4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.max_background_compactions: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.max_subcompactions: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.max_open_files: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Compression algorithms supported:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kZSTD supported: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kXpressCompression supported: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kBZip2Compression supported: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kLZ4Compression supported: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kZlibCompression supported: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: #011kSnappyCompression supported: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd358d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8d0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd35a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8d0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd35a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8d0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557eecd35a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 913d458a-c02d-4bc9-b6ba-f790bdbfb0ef
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127745117, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127749999, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003127, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "913d458a-c02d-4bc9-b6ba-f790bdbfb0ef", "db_session_id": "TDEQH9BGDEPQOYZNFORT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127753026, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003127, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "913d458a-c02d-4bc9-b6ba-f790bdbfb0ef", "db_session_id": "TDEQH9BGDEPQOYZNFORT", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127756033, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003127, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "913d458a-c02d-4bc9-b6ba-f790bdbfb0ef", "db_session_id": "TDEQH9BGDEPQOYZNFORT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127757495, "job": 1, "event": "recovery_finished"}
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557eedd8e000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: DB pointer 0x557eedf46000
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usag
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: _get_class not permitted to load lua
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: _get_class not permitted to load sdk
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: osd.0 0 load_pgs
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: osd.0 0 load_pgs opened 0 pgs
Jan 21 08:45:27 np0005590528 ceph-osd[85740]: osd.0 0 log_to_monitors true
Jan 21 08:45:27 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0[85736]: 2026-01-21T13:45:27.787+0000 7f7953eb78c0 -1 osd.0 0 log_to_monitors true
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 21 08:45:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 21 08:45:27 np0005590528 podman[86288]: 2026-01-21 13:45:27.914481301 +0000 UTC m=+0.040852157 container create ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:27 np0005590528 systemd[1]: Started libpod-conmon-ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd.scope.
Jan 21 08:45:27 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:27 np0005590528 podman[86288]: 2026-01-21 13:45:27.993318246 +0000 UTC m=+0.119689132 container init ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 08:45:27 np0005590528 podman[86288]: 2026-01-21 13:45:27.898447829 +0000 UTC m=+0.024818715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:27 np0005590528 podman[86288]: 2026-01-21 13:45:27.999368581 +0000 UTC m=+0.125739477 container start ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:28 np0005590528 exciting_haibt[86305]: 167 167
Jan 21 08:45:28 np0005590528 systemd[1]: libpod-ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd.scope: Deactivated successfully.
Jan 21 08:45:28 np0005590528 podman[86288]: 2026-01-21 13:45:28.00478358 +0000 UTC m=+0.131154466 container attach ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:28 np0005590528 podman[86288]: 2026-01-21 13:45:28.005023286 +0000 UTC m=+0.131394162 container died ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:28 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c032c07956f5fd9cb2913e23a301b9a117baadbe0fb73f38b1e55f7faa97fe84-merged.mount: Deactivated successfully.
Jan 21 08:45:28 np0005590528 podman[86288]: 2026-01-21 13:45:28.044655353 +0000 UTC m=+0.171026209 container remove ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 21 08:45:28 np0005590528 systemd[1]: libpod-conmon-ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd.scope: Deactivated successfully.
Jan 21 08:45:28 np0005590528 podman[86334]: 2026-01-21 13:45:28.332506993 +0000 UTC m=+0.062508685 container create d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: Deploying daemon osd.1 on compute-0
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:28 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:28 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:28 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:28 np0005590528 systemd[1]: Started libpod-conmon-d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631.scope.
Jan 21 08:45:28 np0005590528 podman[86334]: 2026-01-21 13:45:28.306695736 +0000 UTC m=+0.036697468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:28 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:28 np0005590528 podman[86334]: 2026-01-21 13:45:28.447079812 +0000 UTC m=+0.177081504 container init d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 08:45:28 np0005590528 podman[86334]: 2026-01-21 13:45:28.458338241 +0000 UTC m=+0.188339893 container start d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:28 np0005590528 podman[86334]: 2026-01-21 13:45:28.461826894 +0000 UTC m=+0.191828636 container attach d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 08:45:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test[86350]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 21 08:45:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test[86350]:                            [--no-systemd] [--no-tmpfs]
Jan 21 08:45:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test[86350]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 21 08:45:28 np0005590528 systemd[1]: libpod-d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631.scope: Deactivated successfully.
Jan 21 08:45:28 np0005590528 podman[86334]: 2026-01-21 13:45:28.655455641 +0000 UTC m=+0.385457333 container died d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:45:28 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd-merged.mount: Deactivated successfully.
Jan 21 08:45:28 np0005590528 podman[86334]: 2026-01-21 13:45:28.713157741 +0000 UTC m=+0.443159413 container remove d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 08:45:28 np0005590528 systemd[1]: libpod-conmon-d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631.scope: Deactivated successfully.
Jan 21 08:45:28 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 21 08:45:28 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 21 08:45:28 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:29 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:29 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:29 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:29 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 21 08:45:29 np0005590528 ceph-osd[85740]: osd.0 0 done with init, starting boot process
Jan 21 08:45:29 np0005590528 ceph-osd[85740]: osd.0 0 start_boot
Jan 21 08:45:29 np0005590528 ceph-osd[85740]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 21 08:45:29 np0005590528 ceph-osd[85740]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 21 08:45:29 np0005590528 ceph-osd[85740]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 21 08:45:29 np0005590528 ceph-osd[85740]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 21 08:45:29 np0005590528 ceph-osd[85740]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:29 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:29 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:29 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:29 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:29 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 21 08:45:29 np0005590528 ceph-mon[75031]: from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 08:45:29 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:29 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:29 np0005590528 systemd[1]: Starting Ceph osd.1 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:45:29 np0005590528 podman[86510]: 2026-01-21 13:45:29.830272211 +0000 UTC m=+0.059730459 container create 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:45:29 np0005590528 podman[86510]: 2026-01-21 13:45:29.796214647 +0000 UTC m=+0.025672905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:29 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:29 np0005590528 podman[86510]: 2026-01-21 13:45:29.974263442 +0000 UTC m=+0.203721750 container init 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 08:45:29 np0005590528 podman[86510]: 2026-01-21 13:45:29.985584573 +0000 UTC m=+0.215042801 container start 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:30 np0005590528 podman[86510]: 2026-01-21 13:45:30.006342249 +0000 UTC m=+0.235800507 container attach 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:30 np0005590528 bash[86510]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:30 np0005590528 bash[86510]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:30 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 08:45:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:30 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:30 np0005590528 ceph-mon[75031]: from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 08:45:30 np0005590528 lvm[86612]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:45:30 np0005590528 lvm[86612]: VG ceph_vg1 finished
Jan 21 08:45:30 np0005590528 lvm[86611]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:45:30 np0005590528 lvm[86611]: VG ceph_vg0 finished
Jan 21 08:45:30 np0005590528 lvm[86614]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:45:30 np0005590528 lvm[86614]: VG ceph_vg2 finished
Jan 21 08:45:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 08:45:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:30 np0005590528 bash[86510]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 08:45:30 np0005590528 bash[86510]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:30 np0005590528 bash[86510]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:30 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 08:45:31 np0005590528 bash[86510]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 08:45:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 21 08:45:31 np0005590528 bash[86510]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 21 08:45:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:31 np0005590528 bash[86510]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:31 np0005590528 bash[86510]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 21 08:45:31 np0005590528 bash[86510]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 21 08:45:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 08:45:31 np0005590528 bash[86510]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 08:45:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 21 08:45:31 np0005590528 bash[86510]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 21 08:45:31 np0005590528 systemd[1]: libpod-473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5.scope: Deactivated successfully.
Jan 21 08:45:31 np0005590528 systemd[1]: libpod-473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5.scope: Consumed 1.652s CPU time.
Jan 21 08:45:31 np0005590528 podman[86717]: 2026-01-21 13:45:31.299843405 +0000 UTC m=+0.047158118 container died 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:45:31 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 08:45:31 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:31 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6-merged.mount: Deactivated successfully.
Jan 21 08:45:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:31 np0005590528 podman[86717]: 2026-01-21 13:45:31.45404486 +0000 UTC m=+0.201359573 container remove 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 08:45:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:31 np0005590528 podman[86776]: 2026-01-21 13:45:31.733194393 +0000 UTC m=+0.044190858 container create 75f58788bd5e57ff46589e9f1af96c16843986114eb397264a3d93ae1812e893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:31 np0005590528 podman[86776]: 2026-01-21 13:45:31.711002042 +0000 UTC m=+0.021998567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:31 np0005590528 podman[86776]: 2026-01-21 13:45:31.827484066 +0000 UTC m=+0.138480531 container init 75f58788bd5e57ff46589e9f1af96c16843986114eb397264a3d93ae1812e893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:45:31 np0005590528 podman[86776]: 2026-01-21 13:45:31.83772351 +0000 UTC m=+0.148719975 container start 75f58788bd5e57ff46589e9f1af96c16843986114eb397264a3d93ae1812e893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 08:45:31 np0005590528 bash[86776]: 75f58788bd5e57ff46589e9f1af96c16843986114eb397264a3d93ae1812e893
Jan 21 08:45:31 np0005590528 systemd[1]: Started Ceph osd.1 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: pidfile_write: ignore empty --pid-file
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:31 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:32 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 21 08:45:32 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: load: jerasure load: lrc 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount shared_bdev_used = 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: RocksDB version: 7.9.2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Git sha 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: DB SUMMARY
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: DB Session ID:  VBALY7Y4KVO2SNNGS5VC
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: CURRENT file:  CURRENT
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                         Options.error_if_exists: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.create_if_missing: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                                     Options.env: 0x5623517cfea0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                                Options.info_log: 0x5623528608a0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                              Options.statistics: (nil)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.use_fsync: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                              Options.db_log_dir: 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.write_buffer_manager: 0x562352706b40
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.unordered_write: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.row_cache: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                              Options.wal_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.two_write_queues: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.wal_compression: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.atomic_flush: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.max_background_jobs: 4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.max_background_compactions: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.max_subcompactions: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.max_open_files: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Compression algorithms supported:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kZSTD supported: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kXpressCompression supported: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kBZip2Compression supported: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kLZ4Compression supported: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kZlibCompression supported: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kSnappyCompression supported: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d3a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d3a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d3a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d58d205c-2573-48b5-a4ae-6f3ea37ef9cd
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132265992, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132267913, "job": 1, "event": "recovery_finished"}
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: freelist init
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: freelist _read_cfg
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs umount
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluefs mount shared_bdev_used = 27262976
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: RocksDB version: 7.9.2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Git sha 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: DB SUMMARY
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: DB Session ID:  VBALY7Y4KVO2SNNGS5VD
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: CURRENT file:  CURRENT
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                         Options.error_if_exists: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.create_if_missing: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                                     Options.env: 0x5623517cfce0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                                Options.info_log: 0x562352860960
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                              Options.statistics: (nil)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.use_fsync: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                              Options.db_log_dir: 
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.write_buffer_manager: 0x562352706b40
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.unordered_write: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.row_cache: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                              Options.wal_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.two_write_queues: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.wal_compression: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.atomic_flush: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.max_background_jobs: 4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.max_background_compactions: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.max_subcompactions: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.max_open_files: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Compression algorithms supported:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kZSTD supported: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kXpressCompression supported: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kBZip2Compression supported: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kLZ4Compression supported: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kZlibCompression supported: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: #011kSnappyCompression supported: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5623528610c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d3a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5623528610c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5623517d3a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5623528610c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5623517d3a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d58d205c-2573-48b5-a4ae-6f3ea37ef9cd
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132326566, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132335433, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003132, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d58d205c-2573-48b5-a4ae-6f3ea37ef9cd", "db_session_id": "VBALY7Y4KVO2SNNGS5VD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:32 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:32 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132368803, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003132, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d58d205c-2573-48b5-a4ae-6f3ea37ef9cd", "db_session_id": "VBALY7Y4KVO2SNNGS5VD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132374844, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003132, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d58d205c-2573-48b5-a4ae-6f3ea37ef9cd", "db_session_id": "VBALY7Y4KVO2SNNGS5VD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132407625, "job": 1, "event": "recovery_finished"}
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562352886000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: DB pointer 0x562352a1a000
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usag
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: _get_class not permitted to load lua
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: _get_class not permitted to load sdk
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: osd.1 0 load_pgs
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: osd.1 0 load_pgs opened 0 pgs
Jan 21 08:45:32 np0005590528 ceph-osd[86795]: osd.1 0 log_to_monitors true
Jan 21 08:45:32 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1[86791]: 2026-01-21T13:45:32.561+0000 7f09ad8058c0 -1 osd.1 0 log_to_monitors true
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 21 08:45:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 21 08:45:32 np0005590528 podman[87334]: 2026-01-21 13:45:32.672129324 +0000 UTC m=+0.063696353 container create e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:32 np0005590528 podman[87334]: 2026-01-21 13:45:32.62804331 +0000 UTC m=+0.019610359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:32 np0005590528 systemd[1]: Started libpod-conmon-e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8.scope.
Jan 21 08:45:32 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:32 np0005590528 podman[87334]: 2026-01-21 13:45:32.791973318 +0000 UTC m=+0.183540427 container init e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:32 np0005590528 podman[87334]: 2026-01-21 13:45:32.805283336 +0000 UTC m=+0.196850365 container start e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 08:45:32 np0005590528 heuristic_bhaskara[87350]: 167 167
Jan 21 08:45:32 np0005590528 systemd[1]: libpod-e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8.scope: Deactivated successfully.
Jan 21 08:45:32 np0005590528 conmon[87350]: conmon e1dde2ca6305377843c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8.scope/container/memory.events
Jan 21 08:45:32 np0005590528 podman[87334]: 2026-01-21 13:45:32.814922996 +0000 UTC m=+0.206490025 container attach e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:32 np0005590528 podman[87334]: 2026-01-21 13:45:32.815710176 +0000 UTC m=+0.207277235 container died e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 08:45:32 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d73f4af2fb1f88925249c286c1dbedb3feca8ced2e398f9d9f129ae30ab91b95-merged.mount: Deactivated successfully.
Jan 21 08:45:32 np0005590528 podman[87334]: 2026-01-21 13:45:32.940100269 +0000 UTC m=+0.331667308 container remove e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:32 np0005590528 systemd[1]: libpod-conmon-e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8.scope: Deactivated successfully.
Jan 21 08:45:32 np0005590528 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: Deploying daemon osd.2 on compute-0
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 21 08:45:33 np0005590528 podman[87381]: 2026-01-21 13:45:33.204873747 +0000 UTC m=+0.059515634 container create a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:33 np0005590528 systemd[1]: Started libpod-conmon-a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358.scope.
Jan 21 08:45:33 np0005590528 podman[87381]: 2026-01-21 13:45:33.166684954 +0000 UTC m=+0.021326871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:33 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:33 np0005590528 podman[87381]: 2026-01-21 13:45:33.319682401 +0000 UTC m=+0.174324308 container init a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:33 np0005590528 podman[87381]: 2026-01-21 13:45:33.326610336 +0000 UTC m=+0.181252223 container start a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:33 np0005590528 podman[87381]: 2026-01-21 13:45:33.336063222 +0000 UTC m=+0.190705109 container attach a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:33 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:33 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.383 iops: 6753.955 elapsed_sec: 0.444
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: log_channel(cluster) log [WRN] : OSD bench result of 6753.955447 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 08:45:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0[85736]: 2026-01-21T13:45:33.399+0000 7f795064b640 -1 osd.0 0 waiting for initial osdmap
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 0 waiting for initial osdmap
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 08:45:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0[85736]: 2026-01-21T13:45:33.421+0000 7f794ac3e640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 8 set_numa_affinity not setting numa affinity
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 21 08:45:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test[87396]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 21 08:45:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test[87396]:                            [--no-systemd] [--no-tmpfs]
Jan 21 08:45:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test[87396]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 21 08:45:33 np0005590528 systemd[1]: libpod-a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358.scope: Deactivated successfully.
Jan 21 08:45:33 np0005590528 podman[87381]: 2026-01-21 13:45:33.513078603 +0000 UTC m=+0.367720490 container died a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:45:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527] boot
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Jan 21 08:45:33 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1-merged.mount: Deactivated successfully.
Jan 21 08:45:33 np0005590528 ceph-osd[85740]: osd.0 9 state: booting -> active
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:33 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:33 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:33 np0005590528 podman[87381]: 2026-01-21 13:45:33.550416825 +0000 UTC m=+0.405058712 container remove a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 08:45:33 np0005590528 systemd[1]: libpod-conmon-a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358.scope: Deactivated successfully.
Jan 21 08:45:33 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 21 08:45:33 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 21 08:45:33 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:33 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:33 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: OSD bench result of 6753.955447 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527] boot
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 08:45:34 np0005590528 systemd[1]: Reloading.
Jan 21 08:45:34 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:45:34 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:45:34 np0005590528 systemd[1]: Starting Ceph osd.2 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Jan 21 08:45:34 np0005590528 ceph-osd[86795]: osd.1 0 done with init, starting boot process
Jan 21 08:45:34 np0005590528 ceph-osd[86795]: osd.1 0 start_boot
Jan 21 08:45:34 np0005590528 ceph-osd[86795]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 21 08:45:34 np0005590528 ceph-osd[86795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 21 08:45:34 np0005590528 ceph-osd[86795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 21 08:45:34 np0005590528 ceph-osd[86795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 21 08:45:34 np0005590528 ceph-osd[86795]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:34 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:34 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:34 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:34 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:34 np0005590528 podman[87558]: 2026-01-21 13:45:34.629341192 +0000 UTC m=+0.053564201 container create b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 21 08:45:34 np0005590528 podman[87558]: 2026-01-21 13:45:34.599294545 +0000 UTC m=+0.023517584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:34 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:34 np0005590528 podman[87558]: 2026-01-21 13:45:34.76523141 +0000 UTC m=+0.189454449 container init b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:34 np0005590528 podman[87558]: 2026-01-21 13:45:34.77568242 +0000 UTC m=+0.199905439 container start b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:34 np0005590528 podman[87558]: 2026-01-21 13:45:34.793861204 +0000 UTC m=+0.218084323 container attach b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 08:45:34 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:34 np0005590528 bash[87558]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:34 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:34 np0005590528 bash[87558]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:34 np0005590528 ceph-mgr[75322]: [devicehealth INFO root] creating mgr pool
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 21 08:45:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 21 08:45:35 np0005590528 lvm[87662]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:45:35 np0005590528 lvm[87662]: VG ceph_vg1 finished
Jan 21 08:45:35 np0005590528 lvm[87661]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:45:35 np0005590528 lvm[87661]: VG ceph_vg0 finished
Jan 21 08:45:35 np0005590528 lvm[87664]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:45:35 np0005590528 lvm[87664]: VG ceph_vg2 finished
Jan 21 08:45:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 08:45:35 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:35 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:35 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:35 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 21 08:45:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 21 08:45:35 np0005590528 ceph-osd[85740]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 21 08:45:35 np0005590528 ceph-osd[85740]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 21 08:45:35 np0005590528 ceph-osd[85740]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:35 np0005590528 bash[87558]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 08:45:35 np0005590528 bash[87558]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:35 np0005590528 bash[87558]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 08:45:35 np0005590528 bash[87558]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 21 08:45:35 np0005590528 bash[87558]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:35 np0005590528 bash[87558]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:35 np0005590528 bash[87558]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 21 08:45:35 np0005590528 bash[87558]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 08:45:35 np0005590528 bash[87558]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 08:45:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 21 08:45:35 np0005590528 bash[87558]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 21 08:45:35 np0005590528 systemd[1]: libpod-b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04.scope: Deactivated successfully.
Jan 21 08:45:35 np0005590528 systemd[1]: libpod-b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04.scope: Consumed 1.481s CPU time.
Jan 21 08:45:35 np0005590528 podman[87558]: 2026-01-21 13:45:35.879177185 +0000 UTC m=+1.303400194 container died b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:45:35 np0005590528 systemd[1]: var-lib-containers-storage-overlay-68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649-merged.mount: Deactivated successfully.
Jan 21 08:45:36 np0005590528 podman[87558]: 2026-01-21 13:45:36.003617099 +0000 UTC m=+1.427840128 container remove b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:36 np0005590528 podman[87823]: 2026-01-21 13:45:36.245404597 +0000 UTC m=+0.070887044 container create 391c65d49d06996033f966187742c0fd8d42ad35a268091a77911a32009e3e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 08:45:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:36 np0005590528 podman[87823]: 2026-01-21 13:45:36.202670456 +0000 UTC m=+0.028152903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:36 np0005590528 podman[87823]: 2026-01-21 13:45:36.329972639 +0000 UTC m=+0.155455126 container init 391c65d49d06996033f966187742c0fd8d42ad35a268091a77911a32009e3e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:45:36 np0005590528 podman[87823]: 2026-01-21 13:45:36.335914901 +0000 UTC m=+0.161397338 container start 391c65d49d06996033f966187742c0fd8d42ad35a268091a77911a32009e3e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:36 np0005590528 bash[87823]: 391c65d49d06996033f966187742c0fd8d42ad35a268091a77911a32009e3e7a
Jan 21 08:45:36 np0005590528 systemd[1]: Started Ceph osd.2 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: pidfile_write: ignore empty --pid-file
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:36 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: load: jerasure load: lrc 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:36 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:36 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount shared_bdev_used = 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: RocksDB version: 7.9.2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Git sha 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: DB SUMMARY
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: DB Session ID:  1BG515NGS1LBB8AUUWF8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: CURRENT file:  CURRENT
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                         Options.error_if_exists: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.create_if_missing: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                                     Options.env: 0x55794fca3ea0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                                Options.info_log: 0x557950cfe8a0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                              Options.statistics: (nil)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.use_fsync: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                              Options.db_log_dir: 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.write_buffer_manager: 0x557950ba4b40
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.unordered_write: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.row_cache: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                              Options.wal_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.two_write_queues: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.wal_compression: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.atomic_flush: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.max_background_jobs: 4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.max_background_compactions: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.max_subcompactions: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.max_open_files: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Compression algorithms supported:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: 	kZSTD supported: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: 	kXpressCompression supported: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: 	kBZip2Compression supported: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: 	kLZ4Compression supported: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: 	kZlibCompression supported: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: 	kSnappyCompression supported: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55794fca78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55794fca78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d483eaa3-2246-48d8-b690-0a189d5aa6bb
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136777611, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136779831, "job": 1, "event": "recovery_finished"}
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: freelist init
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: freelist _read_cfg
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs umount
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluefs mount shared_bdev_used = 27262976
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: RocksDB version: 7.9.2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Git sha 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: DB SUMMARY
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: DB Session ID:  1BG515NGS1LBB8AUUWF9
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: CURRENT file:  CURRENT
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                         Options.error_if_exists: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.create_if_missing: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                                     Options.env: 0x557950af9f80
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                                Options.info_log: 0x557950d0b2a0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                              Options.statistics: (nil)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.use_fsync: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                              Options.db_log_dir: 
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.write_buffer_manager: 0x557950ba5900
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.unordered_write: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.row_cache: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                              Options.wal_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.two_write_queues: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.wal_compression: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.atomic_flush: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.max_background_jobs: 4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.max_background_compactions: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.max_subcompactions: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.max_open_files: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Compression algorithms supported:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: #011kZSTD supported: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: #011kXpressCompression supported: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: #011kBZip2Compression supported: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: #011kLZ4Compression supported: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: #011kZlibCompression supported: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: #011kSnappyCompression supported: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a020)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55794fca74b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a020)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55794fca74b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55794fca74b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d483eaa3-2246-48d8-b690-0a189d5aa6bb
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136874055, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136886035, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d483eaa3-2246-48d8-b690-0a189d5aa6bb", "db_session_id": "1BG515NGS1LBB8AUUWF9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136889244, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d483eaa3-2246-48d8-b690-0a189d5aa6bb", "db_session_id": "1BG515NGS1LBB8AUUWF9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136919472, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d483eaa3-2246-48d8-b690-0a189d5aa6bb", "db_session_id": "1BG515NGS1LBB8AUUWF9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136921854, "job": 1, "event": "recovery_finished"}
Jan 21 08:45:36 np0005590528 ceph-osd[87843]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 21 08:45:36 np0005590528 podman[88323]: 2026-01-21 13:45:36.995303421 +0000 UTC m=+0.057145197 container create 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557950ee3c00
Jan 21 08:45:37 np0005590528 podman[88323]: 2026-01-21 13:45:36.966237846 +0000 UTC m=+0.028079622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: rocksdb: DB pointer 0x557950eb8000
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB us
Jan 21 08:45:37 np0005590528 systemd[1]: Started libpod-conmon-0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350.scope.
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: _get_class not permitted to load lua
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: _get_class not permitted to load sdk
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: osd.2 0 load_pgs
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: osd.2 0 load_pgs opened 0 pgs
Jan 21 08:45:37 np0005590528 ceph-osd[87843]: osd.2 0 log_to_monitors true
Jan 21 08:45:37 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2[87839]: 2026-01-21T13:45:37.068+0000 7f6e1fa298c0 -1 osd.2 0 log_to_monitors true
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 21 08:45:37 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 21 08:45:37 np0005590528 podman[88323]: 2026-01-21 13:45:37.125604105 +0000 UTC m=+0.187445891 container init 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 08:45:37 np0005590528 podman[88323]: 2026-01-21 13:45:37.1337605 +0000 UTC m=+0.195602266 container start 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:37 np0005590528 unruffled_villani[88344]: 167 167
Jan 21 08:45:37 np0005590528 systemd[1]: libpod-0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350.scope: Deactivated successfully.
Jan 21 08:45:37 np0005590528 podman[88323]: 2026-01-21 13:45:37.156075954 +0000 UTC m=+0.217917720 container attach 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:37 np0005590528 podman[88323]: 2026-01-21 13:45:37.156962354 +0000 UTC m=+0.218804120 container died 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:37 np0005590528 systemd[1]: var-lib-containers-storage-overlay-172315e706a0db28320083bb67857eb543d27c1562bdb0a6037e88a20cbdda84-merged.mount: Deactivated successfully.
Jan 21 08:45:37 np0005590528 podman[88323]: 2026-01-21 13:45:37.301695134 +0000 UTC m=+0.363536900 container remove 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 08:45:37 np0005590528 systemd[1]: libpod-conmon-0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350.scope: Deactivated successfully.
Jan 21 08:45:37 np0005590528 podman[88396]: 2026-01-21 13:45:37.466526913 +0000 UTC m=+0.053461149 container create c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 08:45:37 np0005590528 podman[88396]: 2026-01-21 13:45:37.437096439 +0000 UTC m=+0.024030725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:37 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:37 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:37 np0005590528 systemd[1]: Started libpod-conmon-c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f.scope.
Jan 21 08:45:37 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:37 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:37 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:37 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:37 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 21 08:45:37 np0005590528 podman[88396]: 2026-01-21 13:45:37.615947174 +0000 UTC m=+0.202881500 container init c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:37 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:37 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:37 np0005590528 podman[88396]: 2026-01-21 13:45:37.628042224 +0000 UTC m=+0.214976500 container start c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 08:45:37 np0005590528 podman[88396]: 2026-01-21 13:45:37.652444677 +0000 UTC m=+0.239378943 container attach c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 08:45:38 np0005590528 lvm[88487]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:45:38 np0005590528 lvm[88487]: VG ceph_vg0 finished
Jan 21 08:45:38 np0005590528 lvm[88489]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:45:38 np0005590528 lvm[88489]: VG ceph_vg1 finished
Jan 21 08:45:38 np0005590528 lvm[88490]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:45:38 np0005590528 lvm[88490]: VG ceph_vg2 finished
Jan 21 08:45:38 np0005590528 lvm[88491]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:45:38 np0005590528 lvm[88491]: VG ceph_vg0 finished
Jan 21 08:45:38 np0005590528 pedantic_golick[88412]: {}
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.576 iops: 6035.420 elapsed_sec: 0.497
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: log_channel(cluster) log [WRN] : OSD bench result of 6035.420070 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 08:45:38 np0005590528 systemd[1]: libpod-c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f.scope: Deactivated successfully.
Jan 21 08:45:38 np0005590528 systemd[1]: libpod-c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f.scope: Consumed 1.401s CPU time.
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 0 waiting for initial osdmap
Jan 21 08:45:38 np0005590528 podman[88396]: 2026-01-21 13:45:38.496609823 +0000 UTC m=+1.083544059 container died c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 08:45:38 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1[86791]: 2026-01-21T13:45:38.494+0000 7f09a9f99640 -1 osd.1 0 waiting for initial osdmap
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 21 08:45:38 np0005590528 systemd[1]: var-lib-containers-storage-overlay-f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0-merged.mount: Deactivated successfully.
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 21 08:45:38 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1[86791]: 2026-01-21T13:45:38.529+0000 7f09a458c640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 21 08:45:38 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:38 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 08:45:38 np0005590528 podman[88396]: 2026-01-21 13:45:38.547294464 +0000 UTC m=+1.134228700 container remove c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 08:45:38 np0005590528 systemd[1]: libpod-conmon-c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f.scope: Deactivated successfully.
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: osd.2 0 done with init, starting boot process
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: osd.2 0 start_boot
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 21 08:45:38 np0005590528 ceph-osd[87843]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499] boot
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 14 state: booting -> active
Jan 21 08:45:38 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:38 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:38 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:38 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: OSD bench result of 6035.420070 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499] boot
Jan 21 08:45:39 np0005590528 podman[88629]: 2026-01-21 13:45:39.188598812 +0000 UTC m=+0.066532091 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:39 np0005590528 podman[88629]: 2026-01-21 13:45:39.281191035 +0000 UTC m=+0.159124284 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:45:39
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:39 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: [devicehealth INFO root] creating main.db for devicehealth
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 08:45:39 np0005590528 ceph-mgr[75322]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 21 08:45:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 21 08:45:40 np0005590528 podman[88852]: 2026-01-21 13:45:40.324290336 +0000 UTC m=+0.057508886 container create e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:45:40 np0005590528 podman[88852]: 2026-01-21 13:45:40.290410996 +0000 UTC m=+0.023629566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:40 np0005590528 systemd[1]: Started libpod-conmon-e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c.scope.
Jan 21 08:45:40 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:40 np0005590528 podman[88852]: 2026-01-21 13:45:40.450440691 +0000 UTC m=+0.183659261 container init e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:40 np0005590528 podman[88852]: 2026-01-21 13:45:40.457642714 +0000 UTC m=+0.190861264 container start e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:40 np0005590528 interesting_dhawan[88868]: 167 167
Jan 21 08:45:40 np0005590528 systemd[1]: libpod-e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c.scope: Deactivated successfully.
Jan 21 08:45:40 np0005590528 conmon[88868]: conmon e9e422b5a42a144fdf4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c.scope/container/memory.events
Jan 21 08:45:40 np0005590528 podman[88852]: 2026-01-21 13:45:40.488174193 +0000 UTC m=+0.221392773 container attach e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 08:45:40 np0005590528 podman[88852]: 2026-01-21 13:45:40.488497721 +0000 UTC m=+0.221716281 container died e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:40 np0005590528 systemd[1]: var-lib-containers-storage-overlay-22b4d159e22d7e8af2715224b3ec6b6360c8fe84f526ee77f816f4d33c405c4d-merged.mount: Deactivated successfully.
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:40 np0005590528 podman[88852]: 2026-01-21 13:45:40.64201175 +0000 UTC m=+0.375230310 container remove e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 21 08:45:40 np0005590528 systemd[1]: libpod-conmon-e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c.scope: Deactivated successfully.
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:40 np0005590528 podman[88894]: 2026-01-21 13:45:40.832596215 +0000 UTC m=+0.047266430 container create ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 08:45:40 np0005590528 systemd[1]: Started libpod-conmon-ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb.scope.
Jan 21 08:45:40 np0005590528 podman[88894]: 2026-01-21 13:45:40.81062809 +0000 UTC m=+0.025298315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:40 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:40 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:40 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:40 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:40 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 21 08:45:40 np0005590528 podman[88894]: 2026-01-21 13:45:40.960874553 +0000 UTC m=+0.175544768 container init ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:45:40 np0005590528 podman[88894]: 2026-01-21 13:45:40.967841971 +0000 UTC m=+0.182512166 container start ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:45:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:45:40 np0005590528 podman[88894]: 2026-01-21 13:45:40.998357998 +0000 UTC m=+0.213028193 container attach ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.tnwklj(active, since 61s)
Jan 21 08:45:41 np0005590528 charming_kirch[88910]: [
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:    {
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "available": false,
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "being_replaced": false,
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "ceph_device_lvm": false,
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "lsm_data": {},
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "lvs": [],
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "path": "/dev/sr0",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "rejected_reasons": [
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "Has a FileSystem",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "Insufficient space (<5GB)"
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        ],
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        "sys_api": {
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "actuators": null,
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "device_nodes": [
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:                "sr0"
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            ],
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "devname": "sr0",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "human_readable_size": "482.00 KB",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "id_bus": "ata",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "model": "QEMU DVD-ROM",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "nr_requests": "2",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "parent": "/dev/sr0",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "partitions": {},
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "path": "/dev/sr0",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "removable": "1",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "rev": "2.5+",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "ro": "0",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "rotational": "1",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "sas_address": "",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "sas_device_handle": "",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "scheduler_mode": "mq-deadline",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "sectors": 0,
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "sectorsize": "2048",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "size": 493568.0,
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "support_discard": "2048",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "type": "disk",
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:            "vendor": "QEMU"
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:        }
Jan 21 08:45:41 np0005590528 charming_kirch[88910]:    }
Jan 21 08:45:41 np0005590528 charming_kirch[88910]: ]
Jan 21 08:45:41 np0005590528 systemd[1]: libpod-ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb.scope: Deactivated successfully.
Jan 21 08:45:41 np0005590528 podman[88894]: 2026-01-21 13:45:41.49598854 +0000 UTC m=+0.710658735 container died ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 08:45:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 21 08:45:41 np0005590528 systemd[1]: var-lib-containers-storage-overlay-334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a-merged.mount: Deactivated successfully.
Jan 21 08:45:41 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:41 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:41 np0005590528 python3[89735]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:41 np0005590528 podman[88894]: 2026-01-21 13:45:41.685912665 +0000 UTC m=+0.900582860 container remove ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:41 np0005590528 systemd[1]: libpod-conmon-ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb.scope: Deactivated successfully.
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:41 np0005590528 podman[89738]: 2026-01-21 13:45:41.753582508 +0000 UTC m=+0.048774169 container create 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 21 08:45:41 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43689k
Jan 21 08:45:41 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43689k
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 21 08:45:41 np0005590528 ceph-mgr[75322]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 21 08:45:41 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:45:41 np0005590528 systemd[1]: Started libpod-conmon-669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17.scope.
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:45:41 np0005590528 podman[89738]: 2026-01-21 13:45:41.730575263 +0000 UTC m=+0.025766964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:45:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:45:41 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5597912661e6268110fc44df36d3bf4f683642e96a81b724a0082a0d5632c1aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5597912661e6268110fc44df36d3bf4f683642e96a81b724a0082a0d5632c1aa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5597912661e6268110fc44df36d3bf4f683642e96a81b724a0082a0d5632c1aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:41 np0005590528 podman[89738]: 2026-01-21 13:45:41.878828342 +0000 UTC m=+0.174020033 container init 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 21 08:45:41 np0005590528 podman[89738]: 2026-01-21 13:45:41.884929109 +0000 UTC m=+0.180120780 container start 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:41 np0005590528 podman[89738]: 2026-01-21 13:45:41.90857471 +0000 UTC m=+0.203766381 container attach 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:42 np0005590528 podman[89843]: 2026-01-21 13:45:42.330461363 +0000 UTC m=+0.117591440 container create 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 08:45:42 np0005590528 podman[89843]: 2026-01-21 13:45:42.246002485 +0000 UTC m=+0.033132592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.tnwklj(active, since 62s)
Jan 21 08:45:42 np0005590528 systemd[1]: Started libpod-conmon-8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9.scope.
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8715348' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 08:45:42 np0005590528 thirsty_elion[89757]: 
Jan 21 08:45:42 np0005590528 thirsty_elion[89757]: {"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":81,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":2,"osd_up_since":1769003138,"num_in_osds":3,"osd_in_since":1769003119,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":894091264,"bytes_avail":42047193088,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-21T13:44:18:859596+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T13:45:41.522372+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 21 08:45:42 np0005590528 podman[89738]: 2026-01-21 13:45:42.4247519 +0000 UTC m=+0.719943571 container died 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 08:45:42 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:42 np0005590528 systemd[1]: libpod-669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17.scope: Deactivated successfully.
Jan 21 08:45:42 np0005590528 podman[89843]: 2026-01-21 13:45:42.480205958 +0000 UTC m=+0.267336055 container init 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:42 np0005590528 podman[89843]: 2026-01-21 13:45:42.492658118 +0000 UTC m=+0.279788195 container start 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:42 np0005590528 hardcore_hypatia[89860]: 167 167
Jan 21 08:45:42 np0005590528 systemd[1]: libpod-8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9.scope: Deactivated successfully.
Jan 21 08:45:42 np0005590528 systemd[1]: var-lib-containers-storage-overlay-5597912661e6268110fc44df36d3bf4f683642e96a81b724a0082a0d5632c1aa-merged.mount: Deactivated successfully.
Jan 21 08:45:42 np0005590528 podman[89843]: 2026-01-21 13:45:42.520425659 +0000 UTC m=+0.307555736 container attach 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:42 np0005590528 podman[89738]: 2026-01-21 13:45:42.553060566 +0000 UTC m=+0.848252237 container remove 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:42 np0005590528 podman[89843]: 2026-01-21 13:45:42.557590355 +0000 UTC m=+0.344720452 container died 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:42 np0005590528 systemd[1]: libpod-conmon-669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17.scope: Deactivated successfully.
Jan 21 08:45:42 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e65e958054cbb998c07528f0ff384e722fcecab73aec4ceb5ceef5fab0b756c5-merged.mount: Deactivated successfully.
Jan 21 08:45:42 np0005590528 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:42 np0005590528 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 08:45:42 np0005590528 podman[89843]: 2026-01-21 13:45:42.631009688 +0000 UTC m=+0.418139765 container remove 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:42 np0005590528 systemd[1]: libpod-conmon-8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9.scope: Deactivated successfully.
Jan 21 08:45:42 np0005590528 podman[89898]: 2026-01-21 13:45:42.797360354 +0000 UTC m=+0.052856717 container create e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:42 np0005590528 systemd[1]: Started libpod-conmon-e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629.scope.
Jan 21 08:45:42 np0005590528 podman[89898]: 2026-01-21 13:45:42.772309249 +0000 UTC m=+0.027805602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:42 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:42 np0005590528 podman[89898]: 2026-01-21 13:45:42.912725578 +0000 UTC m=+0.168221941 container init e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:42 np0005590528 podman[89898]: 2026-01-21 13:45:42.920174648 +0000 UTC m=+0.175670981 container start e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:42 np0005590528 podman[89898]: 2026-01-21 13:45:42.93017202 +0000 UTC m=+0.185668353 container attach e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.973 iops: 7416.996 elapsed_sec: 0.404
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: log_channel(cluster) log [WRN] : OSD bench result of 7416.996137 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 0 waiting for initial osdmap
Jan 21 08:45:42 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2[87839]: 2026-01-21T13:45:42.930+0000 7f6e1c1bd640 -1 osd.2 0 waiting for initial osdmap
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 16 set_numa_affinity not setting numa affinity
Jan 21 08:45:42 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2[87839]: 2026-01-21T13:45:42.960+0000 7f6e167b0640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 08:45:42 np0005590528 ceph-osd[87843]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 21 08:45:43 np0005590528 python3[89943]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:43 np0005590528 podman[89946]: 2026-01-21 13:45:43.065764172 +0000 UTC m=+0.021551701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 21 08:45:43 np0005590528 podman[89946]: 2026-01-21 13:45:43.385617553 +0000 UTC m=+0.341405042 container create 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:43 np0005590528 friendly_cartwright[89918]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:45:43 np0005590528 friendly_cartwright[89918]: --> All data devices are unavailable
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Jan 21 08:45:43 np0005590528 systemd[1]: libpod-e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629.scope: Deactivated successfully.
Jan 21 08:45:43 np0005590528 podman[89898]: 2026-01-21 13:45:43.513232284 +0000 UTC m=+0.768728677 container died e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555] boot
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Jan 21 08:45:43 np0005590528 systemd[1]: Started libpod-conmon-6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894.scope.
Jan 21 08:45:43 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da28567b720fe8043f2ae903a33e259a488b1875a2724821e75bbc6102c8aa98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:43 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da28567b720fe8043f2ae903a33e259a488b1875a2724821e75bbc6102c8aa98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 08:45:43 np0005590528 ceph-osd[87843]: osd.2 17 state: booting -> active
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: Adjusting osd_memory_target on compute-0 to 43689k
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 21 08:45:43 np0005590528 ceph-mon[75031]: OSD bench result of 7416.996137 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 08:45:43 np0005590528 podman[89946]: 2026-01-21 13:45:43.62201472 +0000 UTC m=+0.577802289 container init 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 08:45:43 np0005590528 podman[89946]: 2026-01-21 13:45:43.631034807 +0000 UTC m=+0.586822326 container start 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 08:45:43 np0005590528 podman[89946]: 2026-01-21 13:45:43.640523586 +0000 UTC m=+0.596311175 container attach 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:43 np0005590528 systemd[1]: var-lib-containers-storage-overlay-4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946-merged.mount: Deactivated successfully.
Jan 21 08:45:43 np0005590528 podman[89898]: 2026-01-21 13:45:43.768090386 +0000 UTC m=+1.023586759 container remove e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 08:45:43 np0005590528 systemd[1]: libpod-conmon-e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629.scope: Deactivated successfully.
Jan 21 08:45:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 08:45:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4103319343' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:44 np0005590528 podman[90079]: 2026-01-21 13:45:44.233945101 +0000 UTC m=+0.054729062 container create 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:44 np0005590528 systemd[1]: Started libpod-conmon-9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59.scope.
Jan 21 08:45:44 np0005590528 podman[90079]: 2026-01-21 13:45:44.207066612 +0000 UTC m=+0.027850593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:44 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:44 np0005590528 podman[90079]: 2026-01-21 13:45:44.315056069 +0000 UTC m=+0.135840050 container init 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 08:45:44 np0005590528 podman[90079]: 2026-01-21 13:45:44.321293239 +0000 UTC m=+0.142077200 container start 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:44 np0005590528 podman[90079]: 2026-01-21 13:45:44.325778287 +0000 UTC m=+0.146562238 container attach 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:44 np0005590528 friendly_mendel[90096]: 167 167
Jan 21 08:45:44 np0005590528 systemd[1]: libpod-9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59.scope: Deactivated successfully.
Jan 21 08:45:44 np0005590528 podman[90079]: 2026-01-21 13:45:44.327309844 +0000 UTC m=+0.148093795 container died 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:44 np0005590528 systemd[1]: var-lib-containers-storage-overlay-7469e86b0810ee158e233ae6977860b97257686e3a088aa6c44769a360e53181-merged.mount: Deactivated successfully.
Jan 21 08:45:44 np0005590528 podman[90079]: 2026-01-21 13:45:44.373378306 +0000 UTC m=+0.194162257 container remove 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:44 np0005590528 systemd[1]: libpod-conmon-9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59.scope: Deactivated successfully.
Jan 21 08:45:44 np0005590528 podman[90119]: 2026-01-21 13:45:44.54462628 +0000 UTC m=+0.053368269 container create 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 08:45:44 np0005590528 systemd[1]: Started libpod-conmon-54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713.scope.
Jan 21 08:45:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 21 08:45:44 np0005590528 ceph-mon[75031]: osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555] boot
Jan 21 08:45:44 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/4103319343' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4103319343' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 21 08:45:44 np0005590528 festive_zhukovsky[89989]: pool 'vms' created
Jan 21 08:45:44 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 21 08:45:44 np0005590528 podman[90119]: 2026-01-21 13:45:44.525313264 +0000 UTC m=+0.034055283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:44 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:44 np0005590528 systemd[1]: libpod-6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894.scope: Deactivated successfully.
Jan 21 08:45:44 np0005590528 podman[89946]: 2026-01-21 13:45:44.636323903 +0000 UTC m=+1.592111392 container died 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:45:44 np0005590528 podman[90119]: 2026-01-21 13:45:44.666093652 +0000 UTC m=+0.174835671 container init 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:44 np0005590528 systemd[1]: var-lib-containers-storage-overlay-da28567b720fe8043f2ae903a33e259a488b1875a2724821e75bbc6102c8aa98-merged.mount: Deactivated successfully.
Jan 21 08:45:44 np0005590528 podman[90119]: 2026-01-21 13:45:44.676001981 +0000 UTC m=+0.184743980 container start 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:44 np0005590528 podman[89946]: 2026-01-21 13:45:44.694349164 +0000 UTC m=+1.650136683 container remove 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:44 np0005590528 podman[90119]: 2026-01-21 13:45:44.699818866 +0000 UTC m=+0.208560885 container attach 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:44 np0005590528 systemd[1]: libpod-conmon-6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894.scope: Deactivated successfully.
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]: {
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:    "0": [
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:        {
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "devices": [
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "/dev/loop3"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            ],
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_name": "ceph_lv0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_size": "21470642176",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "name": "ceph_lv0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "tags": {
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cluster_name": "ceph",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.crush_device_class": "",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.encrypted": "0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.objectstore": "bluestore",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osd_id": "0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.type": "block",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.vdo": "0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.with_tpm": "0"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            },
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "type": "block",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "vg_name": "ceph_vg0"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:        }
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:    ],
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:    "1": [
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:        {
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "devices": [
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "/dev/loop4"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            ],
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_name": "ceph_lv1",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_size": "21470642176",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "name": "ceph_lv1",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "tags": {
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cluster_name": "ceph",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.crush_device_class": "",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.encrypted": "0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.objectstore": "bluestore",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osd_id": "1",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.type": "block",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.vdo": "0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.with_tpm": "0"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            },
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "type": "block",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "vg_name": "ceph_vg1"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:        }
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:    ],
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:    "2": [
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:        {
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "devices": [
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "/dev/loop5"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            ],
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_name": "ceph_lv2",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_size": "21470642176",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "name": "ceph_lv2",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "tags": {
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.cluster_name": "ceph",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.crush_device_class": "",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.encrypted": "0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.objectstore": "bluestore",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osd_id": "2",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.type": "block",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.vdo": "0",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:                "ceph.with_tpm": "0"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            },
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "type": "block",
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:            "vg_name": "ceph_vg2"
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:        }
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]:    ]
Jan 21 08:45:44 np0005590528 pensive_hellman[90135]: }
Jan 21 08:45:44 np0005590528 systemd[1]: libpod-54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713.scope: Deactivated successfully.
Jan 21 08:45:44 np0005590528 conmon[90135]: conmon 54fd8e316c134d32daf1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713.scope/container/memory.events
Jan 21 08:45:45 np0005590528 podman[90181]: 2026-01-21 13:45:45.042442066 +0000 UTC m=+0.028645722 container died 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 08:45:45 np0005590528 python3[90178]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v42: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 08:45:45 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:45:45 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/4103319343' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:45 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1-merged.mount: Deactivated successfully.
Jan 21 08:45:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 21 08:45:45 np0005590528 podman[90181]: 2026-01-21 13:45:45.866091709 +0000 UTC m=+0.852295395 container remove 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 21 08:45:45 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 21 08:45:45 np0005590528 systemd[1]: libpod-conmon-54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713.scope: Deactivated successfully.
Jan 21 08:45:45 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:45:45 np0005590528 podman[90194]: 2026-01-21 13:45:45.965290423 +0000 UTC m=+0.887238298 container create d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 08:45:45 np0005590528 systemd[1]: Started libpod-conmon-d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6.scope.
Jan 21 08:45:46 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:46 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6cb686b2c137970617a1bcacf25ec6fe5b4d14256e24f83102569416b75672/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:46 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6cb686b2c137970617a1bcacf25ec6fe5b4d14256e24f83102569416b75672/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:46 np0005590528 podman[90194]: 2026-01-21 13:45:45.938750033 +0000 UTC m=+0.860697938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:46 np0005590528 podman[90194]: 2026-01-21 13:45:46.038538472 +0000 UTC m=+0.960486377 container init d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:46 np0005590528 podman[90194]: 2026-01-21 13:45:46.046128245 +0000 UTC m=+0.968076130 container start d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:46 np0005590528 podman[90194]: 2026-01-21 13:45:46.051593857 +0000 UTC m=+0.973541732 container attach d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:46 np0005590528 podman[90295]: 2026-01-21 13:45:46.316290446 +0000 UTC m=+0.036320268 container create 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 08:45:46 np0005590528 systemd[1]: Started libpod-conmon-74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03.scope.
Jan 21 08:45:46 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:46 np0005590528 podman[90295]: 2026-01-21 13:45:46.385754423 +0000 UTC m=+0.105784265 container init 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 08:45:46 np0005590528 podman[90295]: 2026-01-21 13:45:46.392849495 +0000 UTC m=+0.112879317 container start 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:46 np0005590528 podman[90295]: 2026-01-21 13:45:46.300873904 +0000 UTC m=+0.020903756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:46 np0005590528 reverent_noether[90312]: 167 167
Jan 21 08:45:46 np0005590528 systemd[1]: libpod-74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03.scope: Deactivated successfully.
Jan 21 08:45:46 np0005590528 podman[90295]: 2026-01-21 13:45:46.405629073 +0000 UTC m=+0.125658975 container attach 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 08:45:46 np0005590528 podman[90295]: 2026-01-21 13:45:46.406160895 +0000 UTC m=+0.126190737 container died 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:45:46 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c01d9faf4e6817f7dfc0573dd3a5b294bf043d9421afc5480a815cc97d121db4-merged.mount: Deactivated successfully.
Jan 21 08:45:46 np0005590528 podman[90295]: 2026-01-21 13:45:46.448486967 +0000 UTC m=+0.168516799 container remove 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 08:45:46 np0005590528 systemd[1]: libpod-conmon-74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03.scope: Deactivated successfully.
Jan 21 08:45:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 08:45:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/267142281' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:46 np0005590528 podman[90339]: 2026-01-21 13:45:46.605264631 +0000 UTC m=+0.053737048 container create 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:45:46 np0005590528 systemd[1]: Started libpod-conmon-6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd.scope.
Jan 21 08:45:46 np0005590528 podman[90339]: 2026-01-21 13:45:46.579269154 +0000 UTC m=+0.027741651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:45:46 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:46 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:46 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:46 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:46 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:46 np0005590528 podman[90339]: 2026-01-21 13:45:46.708657737 +0000 UTC m=+0.157130194 container init 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:45:46 np0005590528 podman[90339]: 2026-01-21 13:45:46.722527562 +0000 UTC m=+0.171000019 container start 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 08:45:46 np0005590528 podman[90339]: 2026-01-21 13:45:46.727423641 +0000 UTC m=+0.175896108 container attach 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 08:45:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 21 08:45:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/267142281' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 21 08:45:46 np0005590528 elegant_haibt[90243]: pool 'volumes' created
Jan 21 08:45:46 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 21 08:45:46 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:45:46 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/267142281' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:46 np0005590528 systemd[1]: libpod-d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6.scope: Deactivated successfully.
Jan 21 08:45:46 np0005590528 podman[90194]: 2026-01-21 13:45:46.899078254 +0000 UTC m=+1.821026179 container died d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:45:46 np0005590528 systemd[1]: var-lib-containers-storage-overlay-5d6cb686b2c137970617a1bcacf25ec6fe5b4d14256e24f83102569416b75672-merged.mount: Deactivated successfully.
Jan 21 08:45:46 np0005590528 podman[90194]: 2026-01-21 13:45:46.952119774 +0000 UTC m=+1.874067689 container remove d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:46 np0005590528 systemd[1]: libpod-conmon-d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6.scope: Deactivated successfully.
Jan 21 08:45:47 np0005590528 python3[90414]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:47 np0005590528 podman[90449]: 2026-01-21 13:45:47.299220953 +0000 UTC m=+0.041522894 container create adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 08:45:47 np0005590528 systemd[1]: Started libpod-conmon-adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f.scope.
Jan 21 08:45:47 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a189270390c06c0329f38073aaa279d98f5bd4e082ed0a9a29c37b94cbafbc9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a189270390c06c0329f38073aaa279d98f5bd4e082ed0a9a29c37b94cbafbc9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:47 np0005590528 podman[90449]: 2026-01-21 13:45:47.376127569 +0000 UTC m=+0.118429530 container init adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:45:47 np0005590528 podman[90449]: 2026-01-21 13:45:47.281185867 +0000 UTC m=+0.023487828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:47 np0005590528 podman[90449]: 2026-01-21 13:45:47.382144264 +0000 UTC m=+0.124446205 container start adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:47 np0005590528 podman[90449]: 2026-01-21 13:45:47.386232923 +0000 UTC m=+0.128534864 container attach adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 08:45:47 np0005590528 lvm[90491]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:45:47 np0005590528 lvm[90491]: VG ceph_vg0 finished
Jan 21 08:45:47 np0005590528 lvm[90490]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:45:47 np0005590528 lvm[90490]: VG ceph_vg1 finished
Jan 21 08:45:47 np0005590528 lvm[90493]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:45:47 np0005590528 lvm[90493]: VG ceph_vg2 finished
Jan 21 08:45:47 np0005590528 lvm[90496]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:45:47 np0005590528 lvm[90496]: VG ceph_vg0 finished
Jan 21 08:45:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v45: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 08:45:47 np0005590528 modest_dirac[90355]: {}
Jan 21 08:45:47 np0005590528 systemd[1]: libpod-6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd.scope: Deactivated successfully.
Jan 21 08:45:47 np0005590528 podman[90339]: 2026-01-21 13:45:47.610604879 +0000 UTC m=+1.059077326 container died 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 08:45:47 np0005590528 systemd[1]: libpod-6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd.scope: Consumed 1.364s CPU time.
Jan 21 08:45:47 np0005590528 systemd[1]: var-lib-containers-storage-overlay-03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523-merged.mount: Deactivated successfully.
Jan 21 08:45:47 np0005590528 podman[90339]: 2026-01-21 13:45:47.655677808 +0000 UTC m=+1.104150245 container remove 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:47 np0005590528 systemd[1]: libpod-conmon-6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd.scope: Deactivated successfully.
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2835557232' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2835557232' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 21 08:45:47 np0005590528 eloquent_booth[90478]: pool 'backups' created
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 21 08:45:47 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/267142281' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2835557232' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:47 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2835557232' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:47 np0005590528 systemd[1]: libpod-adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f.scope: Deactivated successfully.
Jan 21 08:45:47 np0005590528 podman[90449]: 2026-01-21 13:45:47.903991081 +0000 UTC m=+0.646293042 container died adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 08:45:47 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9a189270390c06c0329f38073aaa279d98f5bd4e082ed0a9a29c37b94cbafbc9-merged.mount: Deactivated successfully.
Jan 21 08:45:47 np0005590528 podman[90449]: 2026-01-21 13:45:47.948594758 +0000 UTC m=+0.690896699 container remove adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:47 np0005590528 systemd[1]: libpod-conmon-adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f.scope: Deactivated successfully.
Jan 21 08:45:48 np0005590528 python3[90593]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:48 np0005590528 podman[90594]: 2026-01-21 13:45:48.40768979 +0000 UTC m=+0.063680269 container create 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 08:45:48 np0005590528 systemd[1]: Started libpod-conmon-89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1.scope.
Jan 21 08:45:48 np0005590528 podman[90594]: 2026-01-21 13:45:48.382464141 +0000 UTC m=+0.038454630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:48 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:48 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1cfae45ea9ade2a41d0fae9e1914f6cd9ff29fc9cf151a6b60ecdc4f54235f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:48 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1cfae45ea9ade2a41d0fae9e1914f6cd9ff29fc9cf151a6b60ecdc4f54235f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:48 np0005590528 podman[90594]: 2026-01-21 13:45:48.50547173 +0000 UTC m=+0.161462219 container init 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 21 08:45:48 np0005590528 podman[90594]: 2026-01-21 13:45:48.51166919 +0000 UTC m=+0.167659669 container start 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:48 np0005590528 podman[90594]: 2026-01-21 13:45:48.517175503 +0000 UTC m=+0.173166002 container attach 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:45:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 21 08:45:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 21 08:45:48 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 21 08:45:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 08:45:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/967236663' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:45:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v48: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 08:45:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 21 08:45:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/967236663' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 21 08:45:49 np0005590528 eager_roentgen[90609]: pool 'images' created
Jan 21 08:45:49 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 21 08:45:49 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:45:49 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/967236663' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:49 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/967236663' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:49 np0005590528 systemd[1]: libpod-89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1.scope: Deactivated successfully.
Jan 21 08:45:49 np0005590528 podman[90594]: 2026-01-21 13:45:49.940460058 +0000 UTC m=+1.596450527 container died 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:49 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6b1cfae45ea9ade2a41d0fae9e1914f6cd9ff29fc9cf151a6b60ecdc4f54235f-merged.mount: Deactivated successfully.
Jan 21 08:45:49 np0005590528 podman[90594]: 2026-01-21 13:45:49.979413499 +0000 UTC m=+1.635403968 container remove 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:49 np0005590528 systemd[1]: libpod-conmon-89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1.scope: Deactivated successfully.
Jan 21 08:45:50 np0005590528 python3[90672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:50 np0005590528 podman[90673]: 2026-01-21 13:45:50.404189033 +0000 UTC m=+0.048759438 container create c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 08:45:50 np0005590528 systemd[1]: Started libpod-conmon-c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a.scope.
Jan 21 08:45:50 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:50 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ad0e0507424ea21e76b3510031fb9415f1aa3e69520b41384c6f33348ccee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:50 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ad0e0507424ea21e76b3510031fb9415f1aa3e69520b41384c6f33348ccee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:50 np0005590528 podman[90673]: 2026-01-21 13:45:50.383906183 +0000 UTC m=+0.028476598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:50 np0005590528 podman[90673]: 2026-01-21 13:45:50.490973428 +0000 UTC m=+0.135543843 container init c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:50 np0005590528 podman[90673]: 2026-01-21 13:45:50.49728173 +0000 UTC m=+0.141852135 container start c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 08:45:50 np0005590528 podman[90673]: 2026-01-21 13:45:50.501566273 +0000 UTC m=+0.146136668 container attach c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 21 08:45:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 21 08:45:50 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 21 08:45:50 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:45:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 08:45:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3279425259' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:50 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/3279425259' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v51: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:45:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 21 08:45:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3279425259' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 21 08:45:51 np0005590528 cranky_hertz[90689]: pool 'cephfs.cephfs.meta' created
Jan 21 08:45:51 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 21 08:45:51 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:45:51 np0005590528 systemd[1]: libpod-c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a.scope: Deactivated successfully.
Jan 21 08:45:51 np0005590528 podman[90673]: 2026-01-21 13:45:51.944645797 +0000 UTC m=+1.589216212 container died c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:51 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3c4ad0e0507424ea21e76b3510031fb9415f1aa3e69520b41384c6f33348ccee-merged.mount: Deactivated successfully.
Jan 21 08:45:51 np0005590528 podman[90673]: 2026-01-21 13:45:51.983803232 +0000 UTC m=+1.628373627 container remove c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 08:45:51 np0005590528 systemd[1]: libpod-conmon-c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a.scope: Deactivated successfully.
Jan 21 08:45:52 np0005590528 python3[90754]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:52 np0005590528 podman[90755]: 2026-01-21 13:45:52.342289275 +0000 UTC m=+0.044825923 container create 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:52 np0005590528 systemd[1]: Started libpod-conmon-5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521.scope.
Jan 21 08:45:52 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2f096741a432ad332453e79c2541e8a720501fdd6f8079150c2ac1dcbb27d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2f096741a432ad332453e79c2541e8a720501fdd6f8079150c2ac1dcbb27d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:52 np0005590528 podman[90755]: 2026-01-21 13:45:52.409719203 +0000 UTC m=+0.112255831 container init 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 08:45:52 np0005590528 podman[90755]: 2026-01-21 13:45:52.416436985 +0000 UTC m=+0.118973593 container start 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 08:45:52 np0005590528 podman[90755]: 2026-01-21 13:45:52.319026393 +0000 UTC m=+0.021563031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:52 np0005590528 podman[90755]: 2026-01-21 13:45:52.420650767 +0000 UTC m=+0.123187395 container attach 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 08:45:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 08:45:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3204446367' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 21 08:45:52 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/3279425259' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:52 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/3204446367' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 08:45:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3204446367' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 21 08:45:52 np0005590528 sad_swirles[90770]: pool 'cephfs.cephfs.data' created
Jan 21 08:45:52 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 21 08:45:52 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:45:52 np0005590528 systemd[1]: libpod-5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521.scope: Deactivated successfully.
Jan 21 08:45:52 np0005590528 podman[90755]: 2026-01-21 13:45:52.96600097 +0000 UTC m=+0.668537598 container died 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:45:52 np0005590528 systemd[1]: var-lib-containers-storage-overlay-bc2f096741a432ad332453e79c2541e8a720501fdd6f8079150c2ac1dcbb27d7-merged.mount: Deactivated successfully.
Jan 21 08:45:53 np0005590528 podman[90755]: 2026-01-21 13:45:53.001425206 +0000 UTC m=+0.703961814 container remove 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 08:45:53 np0005590528 systemd[1]: libpod-conmon-5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521.scope: Deactivated successfully.
Jan 21 08:45:53 np0005590528 python3[90836]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:53 np0005590528 podman[90837]: 2026-01-21 13:45:53.386255964 +0000 UTC m=+0.041733807 container create 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 08:45:53 np0005590528 systemd[1]: Started libpod-conmon-0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b.scope.
Jan 21 08:45:53 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ddecaf4929377705ecfd94fe960a272b2ee06c202bfe21bb31ab9b2131cd19/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ddecaf4929377705ecfd94fe960a272b2ee06c202bfe21bb31ab9b2131cd19/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:53 np0005590528 podman[90837]: 2026-01-21 13:45:53.363874904 +0000 UTC m=+0.019352767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:53 np0005590528 podman[90837]: 2026-01-21 13:45:53.463159431 +0000 UTC m=+0.118637294 container init 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 08:45:53 np0005590528 podman[90837]: 2026-01-21 13:45:53.472339043 +0000 UTC m=+0.127816886 container start 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:53 np0005590528 podman[90837]: 2026-01-21 13:45:53.475981531 +0000 UTC m=+0.131459384 container attach 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:45:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:45:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:45:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 21 08:45:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1458626047' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 21 08:45:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 21 08:45:53 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/3204446367' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 08:45:53 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1458626047' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 21 08:45:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1458626047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 21 08:45:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 21 08:45:53 np0005590528 gracious_jones[90852]: enabled application 'rbd' on pool 'vms'
Jan 21 08:45:53 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 21 08:45:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:45:53 np0005590528 systemd[1]: libpod-0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b.scope: Deactivated successfully.
Jan 21 08:45:53 np0005590528 podman[90837]: 2026-01-21 13:45:53.980262523 +0000 UTC m=+0.635740366 container died 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 08:45:54 np0005590528 systemd[1]: var-lib-containers-storage-overlay-05ddecaf4929377705ecfd94fe960a272b2ee06c202bfe21bb31ab9b2131cd19-merged.mount: Deactivated successfully.
Jan 21 08:45:54 np0005590528 podman[90837]: 2026-01-21 13:45:54.0169925 +0000 UTC m=+0.672470353 container remove 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 08:45:54 np0005590528 systemd[1]: libpod-conmon-0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b.scope: Deactivated successfully.
Jan 21 08:45:54 np0005590528 python3[90916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:54 np0005590528 podman[90917]: 2026-01-21 13:45:54.359130758 +0000 UTC m=+0.048976133 container create d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:54 np0005590528 systemd[1]: Started libpod-conmon-d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee.scope.
Jan 21 08:45:54 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b607dcc9ea8f92838c82f5776e3c041cc002254b708f0786abf37554adb97b53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b607dcc9ea8f92838c82f5776e3c041cc002254b708f0786abf37554adb97b53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:54 np0005590528 podman[90917]: 2026-01-21 13:45:54.337285021 +0000 UTC m=+0.027130416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:54 np0005590528 podman[90917]: 2026-01-21 13:45:54.43213786 +0000 UTC m=+0.121983255 container init d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:54 np0005590528 podman[90917]: 2026-01-21 13:45:54.43876011 +0000 UTC m=+0.128605475 container start d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 08:45:54 np0005590528 podman[90917]: 2026-01-21 13:45:54.442281396 +0000 UTC m=+0.132126781 container attach d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 08:45:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 21 08:45:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2483823024' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 21 08:45:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 21 08:45:54 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1458626047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 21 08:45:54 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2483823024' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 21 08:45:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2483823024' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 21 08:45:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 21 08:45:54 np0005590528 festive_heisenberg[90932]: enabled application 'rbd' on pool 'volumes'
Jan 21 08:45:54 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 21 08:45:55 np0005590528 systemd[1]: libpod-d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee.scope: Deactivated successfully.
Jan 21 08:45:55 np0005590528 podman[90917]: 2026-01-21 13:45:55.005677445 +0000 UTC m=+0.695522810 container died d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b607dcc9ea8f92838c82f5776e3c041cc002254b708f0786abf37554adb97b53-merged.mount: Deactivated successfully.
Jan 21 08:45:55 np0005590528 podman[90917]: 2026-01-21 13:45:55.048015917 +0000 UTC m=+0.737861282 container remove d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:55 np0005590528 systemd[1]: libpod-conmon-d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee.scope: Deactivated successfully.
Jan 21 08:45:55 np0005590528 python3[90995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:55 np0005590528 podman[90996]: 2026-01-21 13:45:55.399655455 +0000 UTC m=+0.047923728 container create 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 08:45:55 np0005590528 systemd[1]: Started libpod-conmon-444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96.scope.
Jan 21 08:45:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc5217dd500724f41518dfd77886bbca11f1f3552df460bed4b7942b58a7e40/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc5217dd500724f41518dfd77886bbca11f1f3552df460bed4b7942b58a7e40/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:55 np0005590528 podman[90996]: 2026-01-21 13:45:55.373574876 +0000 UTC m=+0.021843179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:55 np0005590528 podman[90996]: 2026-01-21 13:45:55.479934653 +0000 UTC m=+0.128202926 container init 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:45:55 np0005590528 podman[90996]: 2026-01-21 13:45:55.484953324 +0000 UTC m=+0.133221607 container start 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 21 08:45:55 np0005590528 podman[90996]: 2026-01-21 13:45:55.488363617 +0000 UTC m=+0.136631890 container attach 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:45:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:45:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 21 08:45:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4007662532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 21 08:45:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 21 08:45:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4007662532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 21 08:45:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 21 08:45:55 np0005590528 peaceful_zhukovsky[91011]: enabled application 'rbd' on pool 'backups'
Jan 21 08:45:55 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2483823024' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 21 08:45:55 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/4007662532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 21 08:45:55 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 21 08:45:56 np0005590528 systemd[1]: libpod-444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96.scope: Deactivated successfully.
Jan 21 08:45:56 np0005590528 podman[90996]: 2026-01-21 13:45:56.006013812 +0000 UTC m=+0.654282095 container died 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:45:56 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1dc5217dd500724f41518dfd77886bbca11f1f3552df460bed4b7942b58a7e40-merged.mount: Deactivated successfully.
Jan 21 08:45:56 np0005590528 podman[90996]: 2026-01-21 13:45:56.043838225 +0000 UTC m=+0.692106498 container remove 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:45:56 np0005590528 systemd[1]: libpod-conmon-444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96.scope: Deactivated successfully.
Jan 21 08:45:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:45:56 np0005590528 python3[91071]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:56 np0005590528 podman[91072]: 2026-01-21 13:45:56.43430607 +0000 UTC m=+0.055283305 container create b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 08:45:56 np0005590528 systemd[1]: Started libpod-conmon-b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3.scope.
Jan 21 08:45:56 np0005590528 podman[91072]: 2026-01-21 13:45:56.411034419 +0000 UTC m=+0.032011654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:56 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9bb50bb52a32994440e6946f007b47286777685a6f041f2dd25b5ab928917/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9bb50bb52a32994440e6946f007b47286777685a6f041f2dd25b5ab928917/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:56 np0005590528 podman[91072]: 2026-01-21 13:45:56.525243985 +0000 UTC m=+0.146221250 container init b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 08:45:56 np0005590528 podman[91072]: 2026-01-21 13:45:56.53038778 +0000 UTC m=+0.151365025 container start b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:56 np0005590528 podman[91072]: 2026-01-21 13:45:56.536461826 +0000 UTC m=+0.157439091 container attach b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:45:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 21 08:45:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2452711801' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 21 08:45:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 21 08:45:56 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/4007662532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 21 08:45:56 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2452711801' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 21 08:45:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2452711801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 21 08:45:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 21 08:45:57 np0005590528 lucid_almeida[91088]: enabled application 'rbd' on pool 'images'
Jan 21 08:45:57 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 21 08:45:57 np0005590528 podman[91072]: 2026-01-21 13:45:57.020810587 +0000 UTC m=+0.641787792 container died b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 08:45:57 np0005590528 systemd[1]: libpod-b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3.scope: Deactivated successfully.
Jan 21 08:45:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-41a9bb50bb52a32994440e6946f007b47286777685a6f041f2dd25b5ab928917-merged.mount: Deactivated successfully.
Jan 21 08:45:57 np0005590528 podman[91072]: 2026-01-21 13:45:57.072470695 +0000 UTC m=+0.693447900 container remove b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:45:57 np0005590528 systemd[1]: libpod-conmon-b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3.scope: Deactivated successfully.
Jan 21 08:45:57 np0005590528 python3[91151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:57 np0005590528 podman[91152]: 2026-01-21 13:45:57.445690254 +0000 UTC m=+0.039613608 container create 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:45:57 np0005590528 systemd[1]: Started libpod-conmon-6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7.scope.
Jan 21 08:45:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed8824be6801e88f908de58a65a43f79da871c9cdb7bf2981ad63c90c8d7ce8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed8824be6801e88f908de58a65a43f79da871c9cdb7bf2981ad63c90c8d7ce8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:57 np0005590528 podman[91152]: 2026-01-21 13:45:57.428640093 +0000 UTC m=+0.022563477 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:45:57 np0005590528 podman[91152]: 2026-01-21 13:45:57.53214303 +0000 UTC m=+0.126066384 container init 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:45:57 np0005590528 podman[91152]: 2026-01-21 13:45:57.538025303 +0000 UTC m=+0.131948667 container start 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 08:45:57 np0005590528 podman[91152]: 2026-01-21 13:45:57.54202327 +0000 UTC m=+0.135946644 container attach 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:45:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 21 08:45:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2465980745' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 21 08:45:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 21 08:45:58 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2452711801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 21 08:45:58 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2465980745' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 21 08:45:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2465980745' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 21 08:45:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 21 08:45:58 np0005590528 cranky_napier[91167]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 21 08:45:58 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 21 08:45:58 np0005590528 systemd[1]: libpod-6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7.scope: Deactivated successfully.
Jan 21 08:45:58 np0005590528 podman[91192]: 2026-01-21 13:45:58.256322802 +0000 UTC m=+0.027035625 container died 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 08:45:58 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2ed8824be6801e88f908de58a65a43f79da871c9cdb7bf2981ad63c90c8d7ce8-merged.mount: Deactivated successfully.
Jan 21 08:45:58 np0005590528 podman[91192]: 2026-01-21 13:45:58.289638425 +0000 UTC m=+0.060351228 container remove 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 08:45:58 np0005590528 systemd[1]: libpod-conmon-6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7.scope: Deactivated successfully.
Jan 21 08:45:58 np0005590528 python3[91232]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:45:58 np0005590528 podman[91233]: 2026-01-21 13:45:58.701895787 +0000 UTC m=+0.058393101 container create 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 08:45:58 np0005590528 systemd[1]: Started libpod-conmon-569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5.scope.
Jan 21 08:45:58 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:45:58 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e897a81b6f57b8428b25d495a1c6322c428411d3c3ab1d350510a0f1a6ef2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:58 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e897a81b6f57b8428b25d495a1c6322c428411d3c3ab1d350510a0f1a6ef2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:45:58 np0005590528 podman[91233]: 2026-01-21 13:45:58.679440365 +0000 UTC m=+0.035937489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:45:58 np0005590528 podman[91233]: 2026-01-21 13:45:58.778477066 +0000 UTC m=+0.134974170 container init 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 08:45:58 np0005590528 podman[91233]: 2026-01-21 13:45:58.783752792 +0000 UTC m=+0.140249926 container start 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:45:58 np0005590528 podman[91233]: 2026-01-21 13:45:58.787463842 +0000 UTC m=+0.143960936 container attach 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:45:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 21 08:45:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2482580776' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 21 08:45:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 21 08:45:59 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2465980745' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 21 08:45:59 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2482580776' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 21 08:45:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2482580776' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 21 08:45:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 21 08:45:59 np0005590528 funny_ellis[91248]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 21 08:45:59 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 21 08:45:59 np0005590528 systemd[1]: libpod-569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5.scope: Deactivated successfully.
Jan 21 08:45:59 np0005590528 podman[91233]: 2026-01-21 13:45:59.234893863 +0000 UTC m=+0.591390957 container died 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:45:59 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ba4e897a81b6f57b8428b25d495a1c6322c428411d3c3ab1d350510a0f1a6ef2-merged.mount: Deactivated successfully.
Jan 21 08:45:59 np0005590528 podman[91233]: 2026-01-21 13:45:59.273243428 +0000 UTC m=+0.629740522 container remove 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 08:45:59 np0005590528 systemd[1]: libpod-conmon-569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5.scope: Deactivated successfully.
Jan 21 08:45:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:00 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2482580776' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 21 08:46:00 np0005590528 python3[91360]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:46:00 np0005590528 python3[91431]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003159.9659805-36598-104284940752309/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:46:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:01 np0005590528 python3[91533]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:46:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:01 np0005590528 python3[91608]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003161.094617-36612-233886849682918/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=75aa2a87d3a9bd957ba9f2b5be706649780713ad backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:46:02 np0005590528 python3[91658]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:02 np0005590528 podman[91659]: 2026-01-21 13:46:02.263588952 +0000 UTC m=+0.043154653 container create 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:02 np0005590528 systemd[1]: Started libpod-conmon-2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9.scope.
Jan 21 08:46:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552ca93064c06b2a11d064ed82ce4117c64ce9678cec9abfcd7258d4db4b6db8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552ca93064c06b2a11d064ed82ce4117c64ce9678cec9abfcd7258d4db4b6db8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552ca93064c06b2a11d064ed82ce4117c64ce9678cec9abfcd7258d4db4b6db8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:02 np0005590528 podman[91659]: 2026-01-21 13:46:02.247056213 +0000 UTC m=+0.026621934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:02 np0005590528 podman[91659]: 2026-01-21 13:46:02.345167391 +0000 UTC m=+0.124733142 container init 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 08:46:02 np0005590528 podman[91659]: 2026-01-21 13:46:02.351878403 +0000 UTC m=+0.131444104 container start 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:46:02 np0005590528 podman[91659]: 2026-01-21 13:46:02.354953177 +0000 UTC m=+0.134518928 container attach 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 08:46:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 08:46:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1327509013' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 08:46:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1327509013' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 08:46:02 np0005590528 nifty_jemison[91674]: 
Jan 21 08:46:02 np0005590528 nifty_jemison[91674]: [global]
Jan 21 08:46:02 np0005590528 nifty_jemison[91674]: 	fsid = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 08:46:02 np0005590528 nifty_jemison[91674]: 	mon_host = 192.168.122.100
Jan 21 08:46:02 np0005590528 nifty_jemison[91674]: 	rgw_keystone_api_version = 3
Jan 21 08:46:02 np0005590528 systemd[1]: libpod-2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9.scope: Deactivated successfully.
Jan 21 08:46:02 np0005590528 podman[91659]: 2026-01-21 13:46:02.773221504 +0000 UTC m=+0.552787205 container died 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 08:46:02 np0005590528 systemd[1]: var-lib-containers-storage-overlay-552ca93064c06b2a11d064ed82ce4117c64ce9678cec9abfcd7258d4db4b6db8-merged.mount: Deactivated successfully.
Jan 21 08:46:02 np0005590528 podman[91659]: 2026-01-21 13:46:02.826635633 +0000 UTC m=+0.606201334 container remove 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:02 np0005590528 systemd[1]: libpod-conmon-2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9.scope: Deactivated successfully.
Jan 21 08:46:03 np0005590528 python3[91786]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:03 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1327509013' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 08:46:03 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1327509013' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 08:46:03 np0005590528 podman[91823]: 2026-01-21 13:46:03.250100554 +0000 UTC m=+0.043765507 container create 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 08:46:03 np0005590528 systemd[1]: Started libpod-conmon-841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b.scope.
Jan 21 08:46:03 np0005590528 podman[91837]: 2026-01-21 13:46:03.29836374 +0000 UTC m=+0.060310127 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:03 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f19319e99ed3a056af60e79eea469f30a7a2d8bcd64a2fd8e7d3347e04af1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f19319e99ed3a056af60e79eea469f30a7a2d8bcd64a2fd8e7d3347e04af1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f19319e99ed3a056af60e79eea469f30a7a2d8bcd64a2fd8e7d3347e04af1d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:03 np0005590528 podman[91823]: 2026-01-21 13:46:03.229442696 +0000 UTC m=+0.023107669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:03 np0005590528 podman[91823]: 2026-01-21 13:46:03.334524583 +0000 UTC m=+0.128189566 container init 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 08:46:03 np0005590528 podman[91823]: 2026-01-21 13:46:03.340346863 +0000 UTC m=+0.134011816 container start 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:03 np0005590528 podman[91823]: 2026-01-21 13:46:03.343814786 +0000 UTC m=+0.137479739 container attach 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 08:46:03 np0005590528 podman[91837]: 2026-01-21 13:46:03.402504783 +0000 UTC m=+0.164451200 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 21 08:46:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/188019200' entity='client.admin' 
Jan 21 08:46:03 np0005590528 quizzical_hofstadter[91859]: set ssl_option
Jan 21 08:46:03 np0005590528 systemd[1]: libpod-841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b.scope: Deactivated successfully.
Jan 21 08:46:03 np0005590528 podman[91823]: 2026-01-21 13:46:03.961675732 +0000 UTC m=+0.755340695 container died 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:03 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b1f19319e99ed3a056af60e79eea469f30a7a2d8bcd64a2fd8e7d3347e04af1d-merged.mount: Deactivated successfully.
Jan 21 08:46:04 np0005590528 podman[91823]: 2026-01-21 13:46:04.011083224 +0000 UTC m=+0.804748207 container remove 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 08:46:04 np0005590528 systemd[1]: libpod-conmon-841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b.scope: Deactivated successfully.
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:04 np0005590528 python3[92063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:04 np0005590528 podman[92101]: 2026-01-21 13:46:04.433732016 +0000 UTC m=+0.045728885 container create 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:04 np0005590528 systemd[1]: Started libpod-conmon-59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9.scope.
Jan 21 08:46:04 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33e89a4271820b6b6b4f0c028f5be57c08847657ec37dc1a341fb9597b97be9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33e89a4271820b6b6b4f0c028f5be57c08847657ec37dc1a341fb9597b97be9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33e89a4271820b6b6b4f0c028f5be57c08847657ec37dc1a341fb9597b97be9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:04 np0005590528 podman[92101]: 2026-01-21 13:46:04.413813415 +0000 UTC m=+0.025810334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:04 np0005590528 podman[92101]: 2026-01-21 13:46:04.518759778 +0000 UTC m=+0.130756697 container init 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:46:04 np0005590528 podman[92101]: 2026-01-21 13:46:04.527668093 +0000 UTC m=+0.139665002 container start 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:04 np0005590528 podman[92101]: 2026-01-21 13:46:04.531813134 +0000 UTC m=+0.143810023 container attach 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:04 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:46:04 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 21 08:46:04 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:04 np0005590528 flamboyant_payne[92117]: Scheduled rgw.rgw update...
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/188019200' entity='client.admin' 
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:46:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:04 np0005590528 systemd[1]: libpod-59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9.scope: Deactivated successfully.
Jan 21 08:46:04 np0005590528 podman[92101]: 2026-01-21 13:46:04.951823312 +0000 UTC m=+0.563820211 container died 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:04 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b33e89a4271820b6b6b4f0c028f5be57c08847657ec37dc1a341fb9597b97be9-merged.mount: Deactivated successfully.
Jan 21 08:46:04 np0005590528 podman[92101]: 2026-01-21 13:46:04.997376761 +0000 UTC m=+0.609373630 container remove 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:05 np0005590528 systemd[1]: libpod-conmon-59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9.scope: Deactivated successfully.
Jan 21 08:46:05 np0005590528 podman[92244]: 2026-01-21 13:46:05.324650391 +0000 UTC m=+0.049727551 container create fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:05 np0005590528 systemd[1]: Started libpod-conmon-fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8.scope.
Jan 21 08:46:05 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:05 np0005590528 podman[92244]: 2026-01-21 13:46:05.303551412 +0000 UTC m=+0.028628632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:05 np0005590528 podman[92244]: 2026-01-21 13:46:05.407584543 +0000 UTC m=+0.132661723 container init fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:05 np0005590528 podman[92244]: 2026-01-21 13:46:05.418740422 +0000 UTC m=+0.143817582 container start fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 08:46:05 np0005590528 modest_hellman[92260]: 167 167
Jan 21 08:46:05 np0005590528 podman[92244]: 2026-01-21 13:46:05.423419866 +0000 UTC m=+0.148497036 container attach fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:46:05 np0005590528 systemd[1]: libpod-fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8.scope: Deactivated successfully.
Jan 21 08:46:05 np0005590528 conmon[92260]: conmon fb094c73159caa40d79d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8.scope/container/memory.events
Jan 21 08:46:05 np0005590528 podman[92244]: 2026-01-21 13:46:05.425406373 +0000 UTC m=+0.150483543 container died fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:46:05 np0005590528 systemd[1]: var-lib-containers-storage-overlay-814dd41e7dfe4ab1e81c1ff5d5ce127b50dbb323c0ac52c0dd601c08ea05ad35-merged.mount: Deactivated successfully.
Jan 21 08:46:05 np0005590528 podman[92244]: 2026-01-21 13:46:05.462786466 +0000 UTC m=+0.187863626 container remove fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:05 np0005590528 systemd[1]: libpod-conmon-fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8.scope: Deactivated successfully.
Jan 21 08:46:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:05 np0005590528 podman[92284]: 2026-01-21 13:46:05.619058608 +0000 UTC m=+0.039514965 container create 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:05 np0005590528 systemd[1]: Started libpod-conmon-411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294.scope.
Jan 21 08:46:05 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:05 np0005590528 podman[92284]: 2026-01-21 13:46:05.601973745 +0000 UTC m=+0.022430122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:05 np0005590528 podman[92284]: 2026-01-21 13:46:05.6991355 +0000 UTC m=+0.119591877 container init 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:05 np0005590528 podman[92284]: 2026-01-21 13:46:05.705250319 +0000 UTC m=+0.125706676 container start 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 21 08:46:05 np0005590528 podman[92284]: 2026-01-21 13:46:05.708125398 +0000 UTC m=+0.128581755 container attach 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 08:46:05 np0005590528 ceph-mon[75031]: Saving service rgw.rgw spec with placement compute-0
Jan 21 08:46:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:06 np0005590528 python3[92388]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:46:06 np0005590528 flamboyant_blackwell[92301]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:46:06 np0005590528 flamboyant_blackwell[92301]: --> All data devices are unavailable
Jan 21 08:46:06 np0005590528 systemd[1]: libpod-411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294.scope: Deactivated successfully.
Jan 21 08:46:06 np0005590528 podman[92284]: 2026-01-21 13:46:06.20203129 +0000 UTC m=+0.622487647 container died 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:46:06 np0005590528 python3[92478]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003165.8595064-36653-261769494656179/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:46:07 np0005590528 python3[92528]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:07 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072-merged.mount: Deactivated successfully.
Jan 21 08:46:07 np0005590528 podman[92284]: 2026-01-21 13:46:07.138860633 +0000 UTC m=+1.559316980 container remove 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:07 np0005590528 systemd[1]: libpod-conmon-411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294.scope: Deactivated successfully.
Jan 21 08:46:07 np0005590528 podman[92529]: 2026-01-21 13:46:07.159628234 +0000 UTC m=+0.131158607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:07 np0005590528 podman[92529]: 2026-01-21 13:46:07.345755437 +0000 UTC m=+0.317285780 container create bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:07 np0005590528 systemd[1]: Started libpod-conmon-bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b.scope.
Jan 21 08:46:07 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:07 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba801e7969df5b75e9f9018e0a09090fc7b10a7cc738d42256afe03288fd222f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:07 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba801e7969df5b75e9f9018e0a09090fc7b10a7cc738d42256afe03288fd222f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:07 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba801e7969df5b75e9f9018e0a09090fc7b10a7cc738d42256afe03288fd222f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:07 np0005590528 podman[92529]: 2026-01-21 13:46:07.466549773 +0000 UTC m=+0.438080206 container init bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:07 np0005590528 podman[92529]: 2026-01-21 13:46:07.473357197 +0000 UTC m=+0.444887540 container start bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:07 np0005590528 podman[92529]: 2026-01-21 13:46:07.477706432 +0000 UTC m=+0.449236785 container attach bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:46:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:07 np0005590528 podman[92609]: 2026-01-21 13:46:07.573843362 +0000 UTC m=+0.044770542 container create 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:07 np0005590528 systemd[1]: Started libpod-conmon-7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44.scope.
Jan 21 08:46:07 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:07 np0005590528 podman[92609]: 2026-01-21 13:46:07.640140733 +0000 UTC m=+0.111067943 container init 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:07 np0005590528 podman[92609]: 2026-01-21 13:46:07.648512195 +0000 UTC m=+0.119439375 container start 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 21 08:46:07 np0005590528 podman[92609]: 2026-01-21 13:46:07.556489374 +0000 UTC m=+0.027416574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:07 np0005590528 infallible_wright[92643]: 167 167
Jan 21 08:46:07 np0005590528 systemd[1]: libpod-7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44.scope: Deactivated successfully.
Jan 21 08:46:07 np0005590528 conmon[92643]: conmon 7a66a71644e3356b1e80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44.scope/container/memory.events
Jan 21 08:46:07 np0005590528 podman[92609]: 2026-01-21 13:46:07.656016646 +0000 UTC m=+0.126943826 container attach 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 08:46:07 np0005590528 podman[92609]: 2026-01-21 13:46:07.656471657 +0000 UTC m=+0.127398837 container died 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:46:07 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d770cfef0926bba36371ee8250bdc64a654be666cb21ef44dcab2c2759398cfc-merged.mount: Deactivated successfully.
Jan 21 08:46:07 np0005590528 podman[92609]: 2026-01-21 13:46:07.841903943 +0000 UTC m=+0.312831143 container remove 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:07 np0005590528 systemd[1]: libpod-conmon-7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44.scope: Deactivated successfully.
Jan 21 08:46:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:46:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 21 08:46:07 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0[75027]: 2026-01-21T13:46:07.954+0000 7f821bd3a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e2 new map
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2026-01-21T13:46:07:955917+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T13:46:07.955594+0000#012modified#0112026-01-21T13:46:07.955594+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 21 08:46:07 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 21 08:46:07 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 08:46:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 21 08:46:08 np0005590528 systemd[1]: libpod-bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b.scope: Deactivated successfully.
Jan 21 08:46:08 np0005590528 podman[92529]: 2026-01-21 13:46:08.003388571 +0000 UTC m=+0.974918914 container died bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 08:46:08 np0005590528 podman[92670]: 2026-01-21 13:46:08.016895077 +0000 UTC m=+0.050166032 container create 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:08 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ba801e7969df5b75e9f9018e0a09090fc7b10a7cc738d42256afe03288fd222f-merged.mount: Deactivated successfully.
Jan 21 08:46:08 np0005590528 podman[92529]: 2026-01-21 13:46:08.043379707 +0000 UTC m=+1.014910070 container remove bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 08:46:08 np0005590528 podman[92670]: 2026-01-21 13:46:07.99130356 +0000 UTC m=+0.024574515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:08 np0005590528 systemd[1]: Started libpod-conmon-6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c.scope.
Jan 21 08:46:08 np0005590528 systemd[1]: libpod-conmon-bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b.scope: Deactivated successfully.
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:08 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:08 np0005590528 podman[92670]: 2026-01-21 13:46:08.127213781 +0000 UTC m=+0.160484726 container init 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:08 np0005590528 podman[92670]: 2026-01-21 13:46:08.141933285 +0000 UTC m=+0.175204280 container start 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 08:46:08 np0005590528 podman[92670]: 2026-01-21 13:46:08.14664262 +0000 UTC m=+0.179913605 container attach 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 08:46:08 np0005590528 python3[92729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:08 np0005590528 brave_murdock[92699]: {
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:    "0": [
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:        {
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "devices": [
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "/dev/loop3"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            ],
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_name": "ceph_lv0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_size": "21470642176",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "name": "ceph_lv0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "tags": {
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.crush_device_class": "",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.encrypted": "0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osd_id": "0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.type": "block",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.vdo": "0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.with_tpm": "0"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            },
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "type": "block",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "vg_name": "ceph_vg0"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:        }
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:    ],
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:    "1": [
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:        {
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "devices": [
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "/dev/loop4"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            ],
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_name": "ceph_lv1",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_size": "21470642176",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "name": "ceph_lv1",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "tags": {
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.crush_device_class": "",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.encrypted": "0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osd_id": "1",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.type": "block",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.vdo": "0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.with_tpm": "0"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            },
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "type": "block",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "vg_name": "ceph_vg1"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:        }
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:    ],
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:    "2": [
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:        {
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "devices": [
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "/dev/loop5"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            ],
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_name": "ceph_lv2",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_size": "21470642176",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "name": "ceph_lv2",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "tags": {
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.crush_device_class": "",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.encrypted": "0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osd_id": "2",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.type": "block",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.vdo": "0",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:                "ceph.with_tpm": "0"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            },
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "type": "block",
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:            "vg_name": "ceph_vg2"
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:        }
Jan 21 08:46:08 np0005590528 brave_murdock[92699]:    ]
Jan 21 08:46:08 np0005590528 brave_murdock[92699]: }
Jan 21 08:46:08 np0005590528 podman[92734]: 2026-01-21 13:46:08.445722579 +0000 UTC m=+0.051385582 container create 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 08:46:08 np0005590528 systemd[1]: libpod-6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c.scope: Deactivated successfully.
Jan 21 08:46:08 np0005590528 podman[92670]: 2026-01-21 13:46:08.463255702 +0000 UTC m=+0.496526687 container died 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:08 np0005590528 systemd[1]: Started libpod-conmon-65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c.scope.
Jan 21 08:46:08 np0005590528 systemd[1]: var-lib-containers-storage-overlay-898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24-merged.mount: Deactivated successfully.
Jan 21 08:46:08 np0005590528 podman[92734]: 2026-01-21 13:46:08.421041812 +0000 UTC m=+0.026704825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:08 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:08 np0005590528 podman[92670]: 2026-01-21 13:46:08.520057503 +0000 UTC m=+0.553328458 container remove 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebe12efe6b10901420571dc7a7ba643b04748d1e5f57e0898b4cd16b156f614/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebe12efe6b10901420571dc7a7ba643b04748d1e5f57e0898b4cd16b156f614/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebe12efe6b10901420571dc7a7ba643b04748d1e5f57e0898b4cd16b156f614/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:08 np0005590528 systemd[1]: libpod-conmon-6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c.scope: Deactivated successfully.
Jan 21 08:46:08 np0005590528 podman[92734]: 2026-01-21 13:46:08.549521544 +0000 UTC m=+0.155184557 container init 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:08 np0005590528 podman[92734]: 2026-01-21 13:46:08.558593504 +0000 UTC m=+0.164256477 container start 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 08:46:08 np0005590528 podman[92734]: 2026-01-21 13:46:08.562149799 +0000 UTC m=+0.167812772 container attach 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 21 08:46:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 08:46:08 np0005590528 ceph-mgr[75322]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 21 08:46:08 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 08:46:08 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:09 np0005590528 gallant_yonath[92758]: Scheduled mds.cephfs update...
Jan 21 08:46:09 np0005590528 systemd[1]: libpod-65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c.scope: Deactivated successfully.
Jan 21 08:46:09 np0005590528 podman[92734]: 2026-01-21 13:46:09.020408831 +0000 UTC m=+0.626071794 container died 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:09 np0005590528 podman[92851]: 2026-01-21 13:46:09.031657313 +0000 UTC m=+0.048042352 container create 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:09 np0005590528 systemd[1]: var-lib-containers-storage-overlay-cebe12efe6b10901420571dc7a7ba643b04748d1e5f57e0898b4cd16b156f614-merged.mount: Deactivated successfully.
Jan 21 08:46:09 np0005590528 podman[92734]: 2026-01-21 13:46:09.062183519 +0000 UTC m=+0.667846482 container remove 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 08:46:09 np0005590528 systemd[1]: Started libpod-conmon-175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff.scope.
Jan 21 08:46:09 np0005590528 systemd[1]: libpod-conmon-65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c.scope: Deactivated successfully.
Jan 21 08:46:09 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:09 np0005590528 podman[92851]: 2026-01-21 13:46:09.091037606 +0000 UTC m=+0.107422655 container init 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:09 np0005590528 podman[92851]: 2026-01-21 13:46:09.096910217 +0000 UTC m=+0.113295256 container start 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:46:09 np0005590528 lucid_lewin[92882]: 167 167
Jan 21 08:46:09 np0005590528 systemd[1]: libpod-175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff.scope: Deactivated successfully.
Jan 21 08:46:09 np0005590528 podman[92851]: 2026-01-21 13:46:09.10031027 +0000 UTC m=+0.116695409 container attach 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 08:46:09 np0005590528 podman[92851]: 2026-01-21 13:46:09.100877283 +0000 UTC m=+0.117262322 container died 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:09 np0005590528 ceph-mon[75031]: Saving service mds.cephfs spec with placement compute-0
Jan 21 08:46:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:09 np0005590528 podman[92851]: 2026-01-21 13:46:09.01248138 +0000 UTC m=+0.028866459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:09 np0005590528 systemd[1]: var-lib-containers-storage-overlay-f65a3f83f9185fc129f2a72939a8b2e88253125228d2fafa05c897a8d396399c-merged.mount: Deactivated successfully.
Jan 21 08:46:09 np0005590528 podman[92851]: 2026-01-21 13:46:09.140489189 +0000 UTC m=+0.156874228 container remove 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:09 np0005590528 systemd[1]: libpod-conmon-175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff.scope: Deactivated successfully.
Jan 21 08:46:09 np0005590528 podman[92909]: 2026-01-21 13:46:09.339902453 +0000 UTC m=+0.068082575 container create 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:09 np0005590528 systemd[1]: Started libpod-conmon-64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8.scope.
Jan 21 08:46:09 np0005590528 podman[92909]: 2026-01-21 13:46:09.316121459 +0000 UTC m=+0.044301601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:09 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:09 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:09 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:09 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:09 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:09 np0005590528 podman[92909]: 2026-01-21 13:46:09.438276988 +0000 UTC m=+0.166457120 container init 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:09 np0005590528 podman[92909]: 2026-01-21 13:46:09.451999939 +0000 UTC m=+0.180180051 container start 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:09 np0005590528 podman[92909]: 2026-01-21 13:46:09.455584645 +0000 UTC m=+0.183764757 container attach 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:10 np0005590528 ceph-mon[75031]: Saving service mds.cephfs spec with placement compute-0
Jan 21 08:46:10 np0005590528 lvm[93056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:46:10 np0005590528 lvm[93057]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:46:10 np0005590528 lvm[93057]: VG ceph_vg1 finished
Jan 21 08:46:10 np0005590528 lvm[93056]: VG ceph_vg0 finished
Jan 21 08:46:10 np0005590528 lvm[93077]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:46:10 np0005590528 lvm[93077]: VG ceph_vg2 finished
Jan 21 08:46:10 np0005590528 compassionate_feynman[92926]: {}
Jan 21 08:46:10 np0005590528 systemd[1]: libpod-64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8.scope: Deactivated successfully.
Jan 21 08:46:10 np0005590528 podman[92909]: 2026-01-21 13:46:10.354239518 +0000 UTC m=+1.082419640 container died 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:10 np0005590528 systemd[1]: libpod-64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8.scope: Consumed 1.405s CPU time.
Jan 21 08:46:10 np0005590528 python3[93085]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 08:46:10 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce-merged.mount: Deactivated successfully.
Jan 21 08:46:10 np0005590528 podman[92909]: 2026-01-21 13:46:10.406778466 +0000 UTC m=+1.134958588 container remove 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:10 np0005590528 systemd[1]: libpod-conmon-64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8.scope: Deactivated successfully.
Jan 21 08:46:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:10 np0005590528 python3[93231]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003170.0899923-36705-144937932674893/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=01672c665cebe1978e709c2eff9d48fb31c7992e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:46:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:46:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:46:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:46:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:46:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:46:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:46:11 np0005590528 podman[93314]: 2026-01-21 13:46:11.07020195 +0000 UTC m=+0.073425573 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:11 np0005590528 podman[93314]: 2026-01-21 13:46:11.190198696 +0000 UTC m=+0.193422299 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:11 np0005590528 python3[93359]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:11 np0005590528 podman[93384]: 2026-01-21 13:46:11.31546042 +0000 UTC m=+0.044842553 container create 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 08:46:11 np0005590528 systemd[1]: Started libpod-conmon-126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd.scope.
Jan 21 08:46:11 np0005590528 podman[93384]: 2026-01-21 13:46:11.292461275 +0000 UTC m=+0.021843438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:11 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:11 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb64b7913aac57991956283d775783ad55d15e9e8f6c688974b7349644d163c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:11 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb64b7913aac57991956283d775783ad55d15e9e8f6c688974b7349644d163c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:11 np0005590528 podman[93384]: 2026-01-21 13:46:11.405844362 +0000 UTC m=+0.135226525 container init 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:11 np0005590528 podman[93384]: 2026-01-21 13:46:11.413334473 +0000 UTC m=+0.142716606 container start 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 08:46:11 np0005590528 podman[93384]: 2026-01-21 13:46:11.416614872 +0000 UTC m=+0.145997005 container attach 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 08:46:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1487634279' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1487634279' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:11 np0005590528 systemd[1]: libpod-126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd.scope: Deactivated successfully.
Jan 21 08:46:11 np0005590528 podman[93384]: 2026-01-21 13:46:11.987740018 +0000 UTC m=+0.717122161 container died 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:12 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3bb64b7913aac57991956283d775783ad55d15e9e8f6c688974b7349644d163c-merged.mount: Deactivated successfully.
Jan 21 08:46:12 np0005590528 podman[93384]: 2026-01-21 13:46:12.396981126 +0000 UTC m=+1.126363259 container remove 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:12 np0005590528 systemd[1]: libpod-conmon-126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd.scope: Deactivated successfully.
Jan 21 08:46:12 np0005590528 podman[93599]: 2026-01-21 13:46:12.485215096 +0000 UTC m=+0.058918713 container create d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:12 np0005590528 systemd[1]: Started libpod-conmon-d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41.scope.
Jan 21 08:46:12 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:12 np0005590528 podman[93599]: 2026-01-21 13:46:12.464303571 +0000 UTC m=+0.038007238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:12 np0005590528 podman[93599]: 2026-01-21 13:46:12.861671373 +0000 UTC m=+0.435375030 container init d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 08:46:12 np0005590528 podman[93599]: 2026-01-21 13:46:12.872869023 +0000 UTC m=+0.446572630 container start d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 08:46:12 np0005590528 podman[93599]: 2026-01-21 13:46:12.876612214 +0000 UTC m=+0.450315821 container attach d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:12 np0005590528 quizzical_mcnulty[93615]: 167 167
Jan 21 08:46:12 np0005590528 systemd[1]: libpod-d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41.scope: Deactivated successfully.
Jan 21 08:46:12 np0005590528 podman[93599]: 2026-01-21 13:46:12.87809957 +0000 UTC m=+0.451803177 container died d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Jan 21 08:46:12 np0005590528 systemd[1]: var-lib-containers-storage-overlay-cff3ee6ba9fc1202db121b3cad2274f64e0dd4fdeea512778a37277f33ee4d79-merged.mount: Deactivated successfully.
Jan 21 08:46:12 np0005590528 podman[93599]: 2026-01-21 13:46:12.918439734 +0000 UTC m=+0.492143351 container remove d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 08:46:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:12 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1487634279' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 21 08:46:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:12 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/1487634279' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 21 08:46:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:46:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:46:12 np0005590528 systemd[1]: libpod-conmon-d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41.scope: Deactivated successfully.
Jan 21 08:46:13 np0005590528 podman[93639]: 2026-01-21 13:46:13.054830765 +0000 UTC m=+0.021380006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:13 np0005590528 podman[93639]: 2026-01-21 13:46:13.173707385 +0000 UTC m=+0.140256606 container create 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 08:46:13 np0005590528 systemd[1]: Started libpod-conmon-3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4.scope.
Jan 21 08:46:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:13 np0005590528 python3[93678]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:13 np0005590528 podman[93639]: 2026-01-21 13:46:13.28328344 +0000 UTC m=+0.249832741 container init 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:13 np0005590528 podman[93639]: 2026-01-21 13:46:13.292154515 +0000 UTC m=+0.258703736 container start 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:46:13 np0005590528 podman[93639]: 2026-01-21 13:46:13.295538426 +0000 UTC m=+0.262087687 container attach 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:13 np0005590528 podman[93687]: 2026-01-21 13:46:13.320353245 +0000 UTC m=+0.043444570 container create 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:13 np0005590528 systemd[1]: Started libpod-conmon-4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a.scope.
Jan 21 08:46:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def8c6eadf554d69743d2d159c0eb5b191a552cb3e985fcb6fab4ef96037ab94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def8c6eadf554d69743d2d159c0eb5b191a552cb3e985fcb6fab4ef96037ab94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:13 np0005590528 podman[93687]: 2026-01-21 13:46:13.375787343 +0000 UTC m=+0.098878668 container init 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:13 np0005590528 podman[93687]: 2026-01-21 13:46:13.383816767 +0000 UTC m=+0.106908092 container start 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:13 np0005590528 podman[93687]: 2026-01-21 13:46:13.386931472 +0000 UTC m=+0.110022797 container attach 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 08:46:13 np0005590528 podman[93687]: 2026-01-21 13:46:13.304056531 +0000 UTC m=+0.027147886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:13 np0005590528 amazing_noyce[93683]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:46:13 np0005590528 amazing_noyce[93683]: --> All data devices are unavailable
Jan 21 08:46:13 np0005590528 systemd[1]: libpod-3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4.scope: Deactivated successfully.
Jan 21 08:46:13 np0005590528 podman[93743]: 2026-01-21 13:46:13.796396056 +0000 UTC m=+0.024498383 container died 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:46:13 np0005590528 systemd[1]: var-lib-containers-storage-overlay-312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db-merged.mount: Deactivated successfully.
Jan 21 08:46:13 np0005590528 podman[93743]: 2026-01-21 13:46:13.838650216 +0000 UTC m=+0.066752533 container remove 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 08:46:13 np0005590528 systemd[1]: libpod-conmon-3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4.scope: Deactivated successfully.
Jan 21 08:46:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 08:46:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263255916' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 08:46:13 np0005590528 hardcore_clarke[93705]: 
Jan 21 08:46:13 np0005590528 hardcore_clarke[93705]: {"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":112,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1769003143,"num_in_osds":3,"osd_in_since":1769003119,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83943424,"bytes_avail":64327983104,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-21T13:46:07:955917+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T13:45:41.522372+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 21 08:46:13 np0005590528 systemd[1]: libpod-4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a.scope: Deactivated successfully.
Jan 21 08:46:13 np0005590528 podman[93687]: 2026-01-21 13:46:13.923767451 +0000 UTC m=+0.646858816 container died 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 08:46:13 np0005590528 systemd[1]: var-lib-containers-storage-overlay-def8c6eadf554d69743d2d159c0eb5b191a552cb3e985fcb6fab4ef96037ab94-merged.mount: Deactivated successfully.
Jan 21 08:46:13 np0005590528 podman[93687]: 2026-01-21 13:46:13.972841805 +0000 UTC m=+0.695933130 container remove 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:13 np0005590528 systemd[1]: libpod-conmon-4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a.scope: Deactivated successfully.
Jan 21 08:46:14 np0005590528 python3[93845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:14 np0005590528 podman[93858]: 2026-01-21 13:46:14.30614258 +0000 UTC m=+0.038761977 container create 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:14 np0005590528 podman[93860]: 2026-01-21 13:46:14.322834323 +0000 UTC m=+0.044685459 container create e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 08:46:14 np0005590528 systemd[1]: Started libpod-conmon-7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f.scope.
Jan 21 08:46:14 np0005590528 systemd[1]: Started libpod-conmon-e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3.scope.
Jan 21 08:46:14 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba4c65322ce91e4fbb2a816812d74c672dec6deb04b0f3bb84ad8bb74155452/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba4c65322ce91e4fbb2a816812d74c672dec6deb04b0f3bb84ad8bb74155452/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:14 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:14 np0005590528 podman[93858]: 2026-01-21 13:46:14.381772916 +0000 UTC m=+0.114392333 container init 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 08:46:14 np0005590528 podman[93858]: 2026-01-21 13:46:14.288344191 +0000 UTC m=+0.020963608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:14 np0005590528 podman[93858]: 2026-01-21 13:46:14.388521789 +0000 UTC m=+0.121141186 container start 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:46:14 np0005590528 podman[93860]: 2026-01-21 13:46:14.391301556 +0000 UTC m=+0.113152702 container init e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:14 np0005590528 podman[93858]: 2026-01-21 13:46:14.39519832 +0000 UTC m=+0.127817717 container attach 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 08:46:14 np0005590528 podman[93860]: 2026-01-21 13:46:14.396310907 +0000 UTC m=+0.118162043 container start e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 08:46:14 np0005590528 podman[93860]: 2026-01-21 13:46:14.300930954 +0000 UTC m=+0.022782100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:14 np0005590528 systemd[1]: libpod-e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3.scope: Deactivated successfully.
Jan 21 08:46:14 np0005590528 podman[93860]: 2026-01-21 13:46:14.400139459 +0000 UTC m=+0.121990615 container attach e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 08:46:14 np0005590528 confident_greider[93892]: 167 167
Jan 21 08:46:14 np0005590528 conmon[93892]: conmon e78d0ef3776de640a1fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3.scope/container/memory.events
Jan 21 08:46:14 np0005590528 podman[93860]: 2026-01-21 13:46:14.401127533 +0000 UTC m=+0.122978669 container died e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 08:46:14 np0005590528 systemd[1]: var-lib-containers-storage-overlay-cc06cd2250586836eadeac001f62cf88c17182fc962171ec81fb3d41b499b092-merged.mount: Deactivated successfully.
Jan 21 08:46:14 np0005590528 podman[93860]: 2026-01-21 13:46:14.433682279 +0000 UTC m=+0.155533405 container remove e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:14 np0005590528 systemd[1]: libpod-conmon-e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3.scope: Deactivated successfully.
Jan 21 08:46:14 np0005590528 podman[93935]: 2026-01-21 13:46:14.600207309 +0000 UTC m=+0.045842908 container create 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 08:46:14 np0005590528 systemd[1]: Started libpod-conmon-1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d.scope.
Jan 21 08:46:14 np0005590528 podman[93935]: 2026-01-21 13:46:14.581156129 +0000 UTC m=+0.026791758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:14 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:14 np0005590528 podman[93935]: 2026-01-21 13:46:14.718720949 +0000 UTC m=+0.164356558 container init 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:14 np0005590528 podman[93935]: 2026-01-21 13:46:14.730507934 +0000 UTC m=+0.176143523 container start 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 08:46:14 np0005590528 podman[93935]: 2026-01-21 13:46:14.734604353 +0000 UTC m=+0.180239992 container attach 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 21 08:46:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 08:46:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2177793343' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 08:46:14 np0005590528 hopeful_joliot[93890]: 
Jan 21 08:46:14 np0005590528 hopeful_joliot[93890]: {"epoch":1,"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","modified":"2026-01-21T13:44:16.665097Z","created":"2026-01-21T13:44:16.665097Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 21 08:46:14 np0005590528 hopeful_joliot[93890]: dumped monmap epoch 1
Jan 21 08:46:14 np0005590528 systemd[1]: libpod-7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f.scope: Deactivated successfully.
Jan 21 08:46:14 np0005590528 podman[93858]: 2026-01-21 13:46:14.894012051 +0000 UTC m=+0.626631458 container died 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:14 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3ba4c65322ce91e4fbb2a816812d74c672dec6deb04b0f3bb84ad8bb74155452-merged.mount: Deactivated successfully.
Jan 21 08:46:14 np0005590528 podman[93858]: 2026-01-21 13:46:14.935302687 +0000 UTC m=+0.667922084 container remove 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:46:14 np0005590528 systemd[1]: libpod-conmon-7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f.scope: Deactivated successfully.
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]: {
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:    "0": [
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:        {
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "devices": [
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "/dev/loop3"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            ],
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_name": "ceph_lv0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_size": "21470642176",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "name": "ceph_lv0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "tags": {
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.crush_device_class": "",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.encrypted": "0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osd_id": "0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.type": "block",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.vdo": "0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.with_tpm": "0"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            },
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "type": "block",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "vg_name": "ceph_vg0"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:        }
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:    ],
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:    "1": [
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:        {
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "devices": [
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "/dev/loop4"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            ],
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_name": "ceph_lv1",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_size": "21470642176",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "name": "ceph_lv1",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "tags": {
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.crush_device_class": "",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.encrypted": "0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osd_id": "1",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.type": "block",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.vdo": "0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.with_tpm": "0"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            },
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "type": "block",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "vg_name": "ceph_vg1"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:        }
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:    ],
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:    "2": [
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:        {
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "devices": [
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "/dev/loop5"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            ],
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_name": "ceph_lv2",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_size": "21470642176",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "name": "ceph_lv2",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "tags": {
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.crush_device_class": "",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.encrypted": "0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osd_id": "2",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.type": "block",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.vdo": "0",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:                "ceph.with_tpm": "0"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            },
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "type": "block",
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:            "vg_name": "ceph_vg2"
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:        }
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]:    ]
Jan 21 08:46:15 np0005590528 admiring_williamson[93952]: }
Jan 21 08:46:15 np0005590528 systemd[1]: libpod-1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d.scope: Deactivated successfully.
Jan 21 08:46:15 np0005590528 podman[93976]: 2026-01-21 13:46:15.157209684 +0000 UTC m=+0.045553771 container died 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:15 np0005590528 systemd[1]: var-lib-containers-storage-overlay-a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb-merged.mount: Deactivated successfully.
Jan 21 08:46:15 np0005590528 podman[93976]: 2026-01-21 13:46:15.214328502 +0000 UTC m=+0.102672519 container remove 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:46:15 np0005590528 systemd[1]: libpod-conmon-1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d.scope: Deactivated successfully.
Jan 21 08:46:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:15 np0005590528 python3[94066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:15 np0005590528 podman[94067]: 2026-01-21 13:46:15.628053359 +0000 UTC m=+0.055819338 container create feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:15 np0005590528 systemd[1]: Started libpod-conmon-feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3.scope.
Jan 21 08:46:15 np0005590528 podman[94067]: 2026-01-21 13:46:15.609528543 +0000 UTC m=+0.037294532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:15 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657857baff19e45638555ec52ccb121b04459bb9d853262e165174a2c5ada54f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657857baff19e45638555ec52ccb121b04459bb9d853262e165174a2c5ada54f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:15 np0005590528 podman[94095]: 2026-01-21 13:46:15.736659641 +0000 UTC m=+0.059758243 container create 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 08:46:15 np0005590528 podman[94067]: 2026-01-21 13:46:15.741972299 +0000 UTC m=+0.169738298 container init feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:15 np0005590528 podman[94067]: 2026-01-21 13:46:15.748006605 +0000 UTC m=+0.175772604 container start feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 08:46:15 np0005590528 podman[94067]: 2026-01-21 13:46:15.754756677 +0000 UTC m=+0.182522686 container attach feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:15 np0005590528 systemd[1]: Started libpod-conmon-5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af.scope.
Jan 21 08:46:15 np0005590528 podman[94095]: 2026-01-21 13:46:15.707723973 +0000 UTC m=+0.030822635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:15 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:15 np0005590528 podman[94095]: 2026-01-21 13:46:15.831431508 +0000 UTC m=+0.154530170 container init 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:15 np0005590528 podman[94095]: 2026-01-21 13:46:15.837990797 +0000 UTC m=+0.161089399 container start 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 21 08:46:15 np0005590528 podman[94095]: 2026-01-21 13:46:15.842394763 +0000 UTC m=+0.165493335 container attach 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 08:46:15 np0005590528 sharp_borg[94114]: 167 167
Jan 21 08:46:15 np0005590528 systemd[1]: libpod-5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af.scope: Deactivated successfully.
Jan 21 08:46:15 np0005590528 podman[94095]: 2026-01-21 13:46:15.843230554 +0000 UTC m=+0.166329126 container died 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:15 np0005590528 systemd[1]: var-lib-containers-storage-overlay-faf1ff166ea645835e67635df657787ef08fe5945854fc00dd993a9ede40a20d-merged.mount: Deactivated successfully.
Jan 21 08:46:15 np0005590528 podman[94095]: 2026-01-21 13:46:15.89653564 +0000 UTC m=+0.219634242 container remove 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:15 np0005590528 systemd[1]: libpod-conmon-5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af.scope: Deactivated successfully.
Jan 21 08:46:16 np0005590528 podman[94155]: 2026-01-21 13:46:16.075941501 +0000 UTC m=+0.038578873 container create 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:16 np0005590528 systemd[1]: Started libpod-conmon-8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556.scope.
Jan 21 08:46:16 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:16 np0005590528 podman[94155]: 2026-01-21 13:46:16.058039359 +0000 UTC m=+0.020676721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:16 np0005590528 podman[94155]: 2026-01-21 13:46:16.157718035 +0000 UTC m=+0.120355427 container init 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:16 np0005590528 podman[94155]: 2026-01-21 13:46:16.163982056 +0000 UTC m=+0.126619388 container start 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:16 np0005590528 podman[94155]: 2026-01-21 13:46:16.167350347 +0000 UTC m=+0.129987739 container attach 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 21 08:46:16 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2176148221' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 21 08:46:16 np0005590528 gracious_grothendieck[94102]: [client.openstack]
Jan 21 08:46:16 np0005590528 gracious_grothendieck[94102]: 	key = AQAK2HBpAAAAABAAhSWZ4orU8dfgZu1d3brE9g==
Jan 21 08:46:16 np0005590528 gracious_grothendieck[94102]: 	caps mgr = "allow *"
Jan 21 08:46:16 np0005590528 gracious_grothendieck[94102]: 	caps mon = "profile rbd"
Jan 21 08:46:16 np0005590528 gracious_grothendieck[94102]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 21 08:46:16 np0005590528 systemd[1]: libpod-feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3.scope: Deactivated successfully.
Jan 21 08:46:16 np0005590528 podman[94067]: 2026-01-21 13:46:16.320083564 +0000 UTC m=+0.747849583 container died feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:16 np0005590528 systemd[1]: var-lib-containers-storage-overlay-657857baff19e45638555ec52ccb121b04459bb9d853262e165174a2c5ada54f-merged.mount: Deactivated successfully.
Jan 21 08:46:16 np0005590528 podman[94067]: 2026-01-21 13:46:16.406313366 +0000 UTC m=+0.834079355 container remove feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:16 np0005590528 systemd[1]: libpod-conmon-feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3.scope: Deactivated successfully.
Jan 21 08:46:16 np0005590528 lvm[94260]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:46:16 np0005590528 lvm[94260]: VG ceph_vg0 finished
Jan 21 08:46:16 np0005590528 lvm[94263]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:46:16 np0005590528 lvm[94263]: VG ceph_vg1 finished
Jan 21 08:46:16 np0005590528 lvm[94265]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:46:16 np0005590528 lvm[94265]: VG ceph_vg2 finished
Jan 21 08:46:16 np0005590528 lvm[94267]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:46:16 np0005590528 lvm[94267]: VG ceph_vg1 finished
Jan 21 08:46:16 np0005590528 lvm[94266]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:46:16 np0005590528 lvm[94266]: VG ceph_vg0 finished
Jan 21 08:46:16 np0005590528 fervent_goldstine[94172]: {}
Jan 21 08:46:16 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/2176148221' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 21 08:46:16 np0005590528 systemd[1]: libpod-8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556.scope: Deactivated successfully.
Jan 21 08:46:16 np0005590528 podman[94155]: 2026-01-21 13:46:16.962168603 +0000 UTC m=+0.924805935 container died 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:16 np0005590528 systemd[1]: libpod-8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556.scope: Consumed 1.259s CPU time.
Jan 21 08:46:16 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16-merged.mount: Deactivated successfully.
Jan 21 08:46:17 np0005590528 podman[94155]: 2026-01-21 13:46:17.004777021 +0000 UTC m=+0.967414363 container remove 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 21 08:46:17 np0005590528 systemd[1]: libpod-conmon-8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556.scope: Deactivated successfully.
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:17 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev 810b50ce-cedb-4e21-ae94-a106b4334385 (Updating rgw.rgw deployment (+1 -> 1))
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:17 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.xeytxr on compute-0
Jan 21 08:46:17 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.xeytxr on compute-0
Jan 21 08:46:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:17 np0005590528 podman[94394]: 2026-01-21 13:46:17.592001697 +0000 UTC m=+0.042579140 container create 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:17 np0005590528 systemd[1]: Started libpod-conmon-3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed.scope.
Jan 21 08:46:17 np0005590528 podman[94394]: 2026-01-21 13:46:17.572027124 +0000 UTC m=+0.022604607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:17 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:17 np0005590528 podman[94394]: 2026-01-21 13:46:17.689133641 +0000 UTC m=+0.139711124 container init 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:17 np0005590528 podman[94394]: 2026-01-21 13:46:17.698679022 +0000 UTC m=+0.149256465 container start 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:17 np0005590528 podman[94394]: 2026-01-21 13:46:17.702882533 +0000 UTC m=+0.153460026 container attach 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 08:46:17 np0005590528 upbeat_black[94461]: 167 167
Jan 21 08:46:17 np0005590528 systemd[1]: libpod-3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed.scope: Deactivated successfully.
Jan 21 08:46:17 np0005590528 conmon[94461]: conmon 3cd82e5d48a51cadd61a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed.scope/container/memory.events
Jan 21 08:46:17 np0005590528 podman[94394]: 2026-01-21 13:46:17.707105795 +0000 UTC m=+0.157683248 container died 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:46:17 np0005590528 systemd[1]: var-lib-containers-storage-overlay-512e993b3557da152ba8bc8a693b59551510d175bf7953e6a3940d13c887afa5-merged.mount: Deactivated successfully.
Jan 21 08:46:17 np0005590528 podman[94394]: 2026-01-21 13:46:17.759850138 +0000 UTC m=+0.210427581 container remove 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:17 np0005590528 systemd[1]: libpod-conmon-3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed.scope: Deactivated successfully.
Jan 21 08:46:17 np0005590528 systemd[1]: Reloading.
Jan 21 08:46:17 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:46:17 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: Deploying daemon rgw.rgw.compute-0.xeytxr on compute-0
Jan 21 08:46:18 np0005590528 systemd[1]: Reloading.
Jan 21 08:46:18 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:46:18 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:46:18 np0005590528 systemd[1]: Starting Ceph rgw.rgw.compute-0.xeytxr for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:46:18 np0005590528 ansible-async_wrapper.py[94627]: Invoked with j61921864083 30 /home/zuul/.ansible/tmp/ansible-tmp-1769003177.5259786-36777-205437609874121/AnsiballZ_command.py _
Jan 21 08:46:18 np0005590528 ansible-async_wrapper.py[94657]: Starting module and watcher
Jan 21 08:46:18 np0005590528 ansible-async_wrapper.py[94657]: Start watching 94658 (30)
Jan 21 08:46:18 np0005590528 ansible-async_wrapper.py[94658]: Start module (94658)
Jan 21 08:46:18 np0005590528 ansible-async_wrapper.py[94627]: Return async_wrapper task started.
Jan 21 08:46:18 np0005590528 podman[94682]: 2026-01-21 13:46:18.660450048 +0000 UTC m=+0.058735809 container create d95768cf4dac1ef056ea8ada597c056fa231007c2f44315123caa31cea263ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-rgw-rgw-compute-0-xeytxr, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:18 np0005590528 python3[94664]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e690b92cd67a3a9c590da30dab4048f43cea21933c6e5b0d2174a9f4e9e70ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e690b92cd67a3a9c590da30dab4048f43cea21933c6e5b0d2174a9f4e9e70ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e690b92cd67a3a9c590da30dab4048f43cea21933c6e5b0d2174a9f4e9e70ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e690b92cd67a3a9c590da30dab4048f43cea21933c6e5b0d2174a9f4e9e70ec/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.xeytxr supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:18 np0005590528 podman[94682]: 2026-01-21 13:46:18.721526131 +0000 UTC m=+0.119811932 container init d95768cf4dac1ef056ea8ada597c056fa231007c2f44315123caa31cea263ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-rgw-rgw-compute-0-xeytxr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:46:18 np0005590528 podman[94682]: 2026-01-21 13:46:18.633614079 +0000 UTC m=+0.031899840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:18 np0005590528 podman[94694]: 2026-01-21 13:46:18.726674756 +0000 UTC m=+0.046801931 container create 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:46:18 np0005590528 podman[94682]: 2026-01-21 13:46:18.731362929 +0000 UTC m=+0.129648690 container start d95768cf4dac1ef056ea8ada597c056fa231007c2f44315123caa31cea263ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-rgw-rgw-compute-0-xeytxr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 08:46:18 np0005590528 bash[94682]: d95768cf4dac1ef056ea8ada597c056fa231007c2f44315123caa31cea263ec8
Jan 21 08:46:18 np0005590528 systemd[1]: Started Ceph rgw.rgw.compute-0.xeytxr for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:46:18 np0005590528 systemd[1]: Started libpod-conmon-40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab.scope.
Jan 21 08:46:18 np0005590528 radosgw[94709]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:46:18 np0005590528 radosgw[94709]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 21 08:46:18 np0005590528 radosgw[94709]: framework: beast
Jan 21 08:46:18 np0005590528 radosgw[94709]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 21 08:46:18 np0005590528 radosgw[94709]: init_numa not setting numa affinity
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:18 np0005590528 podman[94694]: 2026-01-21 13:46:18.70696771 +0000 UTC m=+0.027094905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:18 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba673742d9ae554e2a26cb840ebc314f03b924ab136c66585b4c7234c066537f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba673742d9ae554e2a26cb840ebc314f03b924ab136c66585b4c7234c066537f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 08:46:18 np0005590528 podman[94694]: 2026-01-21 13:46:18.824061566 +0000 UTC m=+0.144188781 container init 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:18 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev 810b50ce-cedb-4e21-ae94-a106b4334385 (Updating rgw.rgw deployment (+1 -> 1))
Jan 21 08:46:18 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 810b50ce-cedb-4e21-ae94-a106b4334385 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Jan 21 08:46:18 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 21 08:46:18 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 08:46:18 np0005590528 podman[94694]: 2026-01-21 13:46:18.830396919 +0000 UTC m=+0.150524104 container start 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 08:46:18 np0005590528 podman[94694]: 2026-01-21 13:46:18.834246553 +0000 UTC m=+0.154373728 container attach 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:18 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev 3d7f3518-2244-4e51-b382-2c2a8c5fe4f4 (Updating mds.cephfs deployment (+1 -> 1))
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:18 np0005590528 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ddixwa on compute-0
Jan 21 08:46:18 np0005590528 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ddixwa on compute-0
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 08:46:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 08:46:19 np0005590528 jovial_cartwright[94717]: 
Jan 21 08:46:19 np0005590528 jovial_cartwright[94717]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 08:46:19 np0005590528 systemd[1]: libpod-40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab.scope: Deactivated successfully.
Jan 21 08:46:19 np0005590528 podman[94855]: 2026-01-21 13:46:19.337952081 +0000 UTC m=+0.039606237 container create 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:46:19 np0005590528 podman[94868]: 2026-01-21 13:46:19.360984737 +0000 UTC m=+0.028322854 container died 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 08:46:19 np0005590528 systemd[1]: Started libpod-conmon-925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63.scope.
Jan 21 08:46:19 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ba673742d9ae554e2a26cb840ebc314f03b924ab136c66585b4c7234c066537f-merged.mount: Deactivated successfully.
Jan 21 08:46:19 np0005590528 podman[94868]: 2026-01-21 13:46:19.411504226 +0000 UTC m=+0.078842343 container remove 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 08:46:19 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:19 np0005590528 systemd[1]: libpod-conmon-40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab.scope: Deactivated successfully.
Jan 21 08:46:19 np0005590528 podman[94855]: 2026-01-21 13:46:19.320409748 +0000 UTC m=+0.022063924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:19 np0005590528 podman[94855]: 2026-01-21 13:46:19.427988495 +0000 UTC m=+0.129642671 container init 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 08:46:19 np0005590528 ansible-async_wrapper.py[94658]: Module complete (94658)
Jan 21 08:46:19 np0005590528 podman[94855]: 2026-01-21 13:46:19.434637885 +0000 UTC m=+0.136292041 container start 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 08:46:19 np0005590528 focused_mahavira[94889]: 167 167
Jan 21 08:46:19 np0005590528 systemd[1]: libpod-925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63.scope: Deactivated successfully.
Jan 21 08:46:19 np0005590528 podman[94855]: 2026-01-21 13:46:19.438236462 +0000 UTC m=+0.139890618 container attach 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:19 np0005590528 podman[94855]: 2026-01-21 13:46:19.438780395 +0000 UTC m=+0.140434551 container died 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:19 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e56b84b837c57d8006887fcf3538b0402e126d950dfc74e2a0aa5a3cf5008b50-merged.mount: Deactivated successfully.
Jan 21 08:46:19 np0005590528 podman[94855]: 2026-01-21 13:46:19.476855414 +0000 UTC m=+0.178509570 container remove 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:19 np0005590528 systemd[1]: libpod-conmon-925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63.scope: Deactivated successfully.
Jan 21 08:46:19 np0005590528 systemd[1]: Reloading.
Jan 21 08:46:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:19 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:46:19 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 21 08:46:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854169589' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 21 08:46:19 np0005590528 systemd[1]: Reloading.
Jan 21 08:46:19 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:46:19 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:46:19 np0005590528 python3[94992]: ansible-ansible.legacy.async_status Invoked with jid=j61921864083.94627 mode=status _async_dir=/root/.ansible_async
Jan 21 08:46:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:20 np0005590528 systemd[1]: Starting Ceph mds.cephfs.compute-0.ddixwa for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 08:46:20 np0005590528 python3[95081]: ansible-ansible.legacy.async_status Invoked with jid=j61921864083.94627 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 08:46:20 np0005590528 podman[95128]: 2026-01-21 13:46:20.333142103 +0000 UTC m=+0.020842664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:20 np0005590528 python3[95166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:20 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 4 completed events
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: Saving service rgw.rgw spec with placement compute-0
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: Deploying daemon mds.cephfs.compute-0.ddixwa on compute-0
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/854169589' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 21 08:46:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v76: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:21 np0005590528 podman[95128]: 2026-01-21 13:46:21.533783125 +0000 UTC m=+1.221483676 container create 380cea61fdd3e7c3770a41073ef15e8e1016252df6e767dd7931a2e6c1d30007 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854169589' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 21 08:46:21 np0005590528 ceph-mgr[75322]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 21 08:46:21 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 35 pg[8.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285da33500b33a392a5323e6f07db112a33bd592cfe956b23ecf335e4b1e4931/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285da33500b33a392a5323e6f07db112a33bd592cfe956b23ecf335e4b1e4931/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285da33500b33a392a5323e6f07db112a33bd592cfe956b23ecf335e4b1e4931/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285da33500b33a392a5323e6f07db112a33bd592cfe956b23ecf335e4b1e4931/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ddixwa supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:21 np0005590528 podman[95167]: 2026-01-21 13:46:21.629990398 +0000 UTC m=+0.694268850 container create d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:21 np0005590528 podman[95128]: 2026-01-21 13:46:21.635763797 +0000 UTC m=+1.323464368 container init 380cea61fdd3e7c3770a41073ef15e8e1016252df6e767dd7931a2e6c1d30007 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:46:21 np0005590528 podman[95128]: 2026-01-21 13:46:21.643313279 +0000 UTC m=+1.331013820 container start 380cea61fdd3e7c3770a41073ef15e8e1016252df6e767dd7931a2e6c1d30007 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 21 08:46:21 np0005590528 bash[95128]: 380cea61fdd3e7c3770a41073ef15e8e1016252df6e767dd7931a2e6c1d30007
Jan 21 08:46:21 np0005590528 systemd[1]: Started Ceph mds.cephfs.compute-0.ddixwa for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 08:46:21 np0005590528 systemd[1]: Started libpod-conmon-d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c.scope.
Jan 21 08:46:21 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d668f61259387b4194989b9bd4467d59e771a6c135b43c1147adfd6bbe768345/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d668f61259387b4194989b9bd4467d59e771a6c135b43c1147adfd6bbe768345/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:21 np0005590528 ceph-mds[95704]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:46:21 np0005590528 ceph-mds[95704]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 21 08:46:21 np0005590528 ceph-mds[95704]: main not setting numa affinity
Jan 21 08:46:21 np0005590528 podman[95167]: 2026-01-21 13:46:21.603342663 +0000 UTC m=+0.667621155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:21 np0005590528 ceph-mds[95704]: pidfile_write: ignore empty --pid-file
Jan 21 08:46:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa[95223]: starting mds.cephfs.compute-0.ddixwa at 
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:21 np0005590528 podman[95167]: 2026-01-21 13:46:21.713219096 +0000 UTC m=+0.777497598 container init d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:21 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Updating MDS map to version 2 from mon.0
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:21 np0005590528 podman[95167]: 2026-01-21 13:46:21.720674766 +0000 UTC m=+0.784953248 container start d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:21 np0005590528 podman[95167]: 2026-01-21 13:46:21.727995223 +0000 UTC m=+0.792273715 container attach d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:21 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev 3d7f3518-2244-4e51-b382-2c2a8c5fe4f4 (Updating mds.cephfs deployment (+1 -> 1))
Jan 21 08:46:21 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 3d7f3518-2244-4e51-b382-2c2a8c5fe4f4 (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 08:46:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 08:46:22 np0005590528 affectionate_carson[95752]: 
Jan 21 08:46:22 np0005590528 affectionate_carson[95752]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 08:46:22 np0005590528 systemd[1]: libpod-d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c.scope: Deactivated successfully.
Jan 21 08:46:22 np0005590528 podman[95167]: 2026-01-21 13:46:22.173470866 +0000 UTC m=+1.237749328 container died d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:22 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d668f61259387b4194989b9bd4467d59e771a6c135b43c1147adfd6bbe768345-merged.mount: Deactivated successfully.
Jan 21 08:46:22 np0005590528 podman[95167]: 2026-01-21 13:46:22.230604425 +0000 UTC m=+1.294882887 container remove d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:22 np0005590528 systemd[1]: libpod-conmon-d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c.scope: Deactivated successfully.
Jan 21 08:46:22 np0005590528 podman[95925]: 2026-01-21 13:46:22.37412162 +0000 UTC m=+0.054997829 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:22 np0005590528 podman[95925]: 2026-01-21 13:46:22.481674356 +0000 UTC m=+0.162550545 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/854169589' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 21 08:46:22 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e3 new map
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-21T13:46:22:714883+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T13:46:07.955594+0000#012modified#0112026-01-21T13:46:07.955594+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ddixwa{-1:14256} state up:standby seq 1 addr [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Updating MDS map to version 3 from mon.0
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Monitors have assigned me to become a standby
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] up:boot
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] as mds.0
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ddixwa assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ddixwa"} v 0)
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.ddixwa"} : dispatch
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e3 all = 0
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e4 new map
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-21T13:46:22:721347+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-21T13:46:07.955594+0000#012modified#0112026-01-21T13:46:22.721339+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14256}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.ddixwa{0:14256} state up:creating seq 1 addr [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ddixwa=up:creating}
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Updating MDS map to version 4 from mon.0
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x1
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x100
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x600
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x601
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x602
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x603
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x604
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x605
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x606
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x607
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x608
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x609
Jan 21 08:46:22 np0005590528 ceph-mds[95704]: mds.0.4 creating_done
Jan 21 08:46:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ddixwa is now active in filesystem cephfs as rank 0
Jan 21 08:46:23 np0005590528 python3[96109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:23 np0005590528 podman[96137]: 2026-01-21 13:46:23.145921009 +0000 UTC m=+0.041806120 container create 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:23 np0005590528 systemd[1]: Started libpod-conmon-4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167.scope.
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:23 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c7c5aea0d958d2ca7050a2d1d79dac63eeec66d27ec7e3a82d7f933093c77a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c7c5aea0d958d2ca7050a2d1d79dac63eeec66d27ec7e3a82d7f933093c77a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:23 np0005590528 podman[96137]: 2026-01-21 13:46:23.129315738 +0000 UTC m=+0.025200859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:23 np0005590528 podman[96137]: 2026-01-21 13:46:23.229096818 +0000 UTC m=+0.124981939 container init 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:23 np0005590528 podman[96137]: 2026-01-21 13:46:23.234808486 +0000 UTC m=+0.130693597 container start 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:23 np0005590528 podman[96137]: 2026-01-21 13:46:23.238948486 +0000 UTC m=+0.134833617 container attach 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 08:46:23 np0005590528 ansible-async_wrapper.py[94657]: Done in kid B.
Jan 21 08:46:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v79: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: daemon mds.cephfs.compute-0.ddixwa assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: Cluster is now healthy
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: daemon mds.cephfs.compute-0.ddixwa is now active in filesystem cephfs as rank 0
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:46:23 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 37 pg[9.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:23 np0005590528 podman[96245]: 2026-01-21 13:46:23.610510664 +0000 UTC m=+0.045711955 container create 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 21 08:46:23 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} v 0)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 08:46:23 np0005590528 suspicious_shannon[96162]: 
Jan 21 08:46:23 np0005590528 suspicious_shannon[96162]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 21 08:46:23 np0005590528 systemd[1]: Started libpod-conmon-7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e.scope.
Jan 21 08:46:23 np0005590528 podman[96137]: 2026-01-21 13:46:23.667686624 +0000 UTC m=+0.563571755 container died 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:23 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:23 np0005590528 systemd[1]: libpod-4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167.scope: Deactivated successfully.
Jan 21 08:46:23 np0005590528 podman[96245]: 2026-01-21 13:46:23.587206751 +0000 UTC m=+0.022408072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:23 np0005590528 systemd[1]: var-lib-containers-storage-overlay-80c7c5aea0d958d2ca7050a2d1d79dac63eeec66d27ec7e3a82d7f933093c77a-merged.mount: Deactivated successfully.
Jan 21 08:46:23 np0005590528 podman[96245]: 2026-01-21 13:46:23.692805511 +0000 UTC m=+0.128006822 container init 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:23 np0005590528 podman[96245]: 2026-01-21 13:46:23.698171051 +0000 UTC m=+0.133372342 container start 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:23 np0005590528 recursing_mccarthy[96265]: 167 167
Jan 21 08:46:23 np0005590528 podman[96137]: 2026-01-21 13:46:23.709536515 +0000 UTC m=+0.605421626 container remove 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:23 np0005590528 systemd[1]: libpod-7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e.scope: Deactivated successfully.
Jan 21 08:46:23 np0005590528 systemd[1]: libpod-conmon-4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167.scope: Deactivated successfully.
Jan 21 08:46:23 np0005590528 podman[96245]: 2026-01-21 13:46:23.720679354 +0000 UTC m=+0.155880645 container attach 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 08:46:23 np0005590528 podman[96245]: 2026-01-21 13:46:23.721325319 +0000 UTC m=+0.156526610 container died 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e5 new map
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).mds e5 print_map
e5
btime 2026-01-21T13:46:23:724742+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-21T13:46:07.955594+0000
modified	2026-01-21T13:46:23.724739+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=14256}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 14256 members: 14256
[mds.cephfs.compute-0.ddixwa{0:14256} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] compat {c=[1],r=[1],i=[1fff]}]
Jan 21 08:46:23 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Updating MDS map to version 5 from mon.0
Jan 21 08:46:23 np0005590528 ceph-mds[95704]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 21 08:46:23 np0005590528 ceph-mds[95704]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 21 08:46:23 np0005590528 ceph-mds[95704]: mds.0.4 recovery_done -- successful recovery!
Jan 21 08:46:23 np0005590528 ceph-mds[95704]: mds.0.4 active_start
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] up:active
Jan 21 08:46:23 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ddixwa=up:active}
Jan 21 08:46:23 np0005590528 systemd[1]: var-lib-containers-storage-overlay-606414b85feb04834ecdd8deae7baff716b9e020c50d4712b319a895bd1eb5e6-merged.mount: Deactivated successfully.
Jan 21 08:46:23 np0005590528 podman[96245]: 2026-01-21 13:46:23.760333841 +0000 UTC m=+0.195535132 container remove 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 08:46:23 np0005590528 systemd[1]: libpod-conmon-7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e.scope: Deactivated successfully.
Jan 21 08:46:23 np0005590528 podman[96304]: 2026-01-21 13:46:23.911758266 +0000 UTC m=+0.035444477 container create ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 08:46:23 np0005590528 systemd[1]: Started libpod-conmon-ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8.scope.
Jan 21 08:46:23 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:23 np0005590528 podman[96304]: 2026-01-21 13:46:23.979368467 +0000 UTC m=+0.103054698 container init ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:23 np0005590528 podman[96304]: 2026-01-21 13:46:23.989256217 +0000 UTC m=+0.112942438 container start ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:23 np0005590528 podman[96304]: 2026-01-21 13:46:23.896913008 +0000 UTC m=+0.020599229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:23 np0005590528 podman[96304]: 2026-01-21 13:46:23.993864538 +0000 UTC m=+0.117550779 container attach ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:46:24 np0005590528 lucid_swanson[96320]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:46:24 np0005590528 lucid_swanson[96320]: --> All data devices are unavailable
Jan 21 08:46:24 np0005590528 systemd[1]: libpod-ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8.scope: Deactivated successfully.
Jan 21 08:46:24 np0005590528 podman[96304]: 2026-01-21 13:46:24.530254096 +0000 UTC m=+0.653940357 container died ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:24 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966-merged.mount: Deactivated successfully.
Jan 21 08:46:24 np0005590528 podman[96304]: 2026-01-21 13:46:24.593893071 +0000 UTC m=+0.717579302 container remove ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:46:24 np0005590528 systemd[1]: libpod-conmon-ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8.scope: Deactivated successfully.
Jan 21 08:46:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 21 08:46:24 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 08:46:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 21 08:46:24 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 21 08:46:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 21 08:46:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 21 08:46:24 np0005590528 python3[96377]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:24 np0005590528 podman[96428]: 2026-01-21 13:46:24.843082966 +0000 UTC m=+0.047037365 container create 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:46:24 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [2] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:24 np0005590528 systemd[1]: Started libpod-conmon-3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9.scope.
Jan 21 08:46:24 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:24 np0005590528 podman[96428]: 2026-01-21 13:46:24.827187673 +0000 UTC m=+0.031142082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:24 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3fd812f315d4a37495b3473e75ec73b68dd7bd4038ad5198361753b9fa22602/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:24 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3fd812f315d4a37495b3473e75ec73b68dd7bd4038ad5198361753b9fa22602/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:24 np0005590528 podman[96428]: 2026-01-21 13:46:24.931627784 +0000 UTC m=+0.135582223 container init 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 08:46:24 np0005590528 podman[96428]: 2026-01-21 13:46:24.945204391 +0000 UTC m=+0.149158800 container start 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 08:46:24 np0005590528 podman[96428]: 2026-01-21 13:46:24.948794459 +0000 UTC m=+0.152748858 container attach 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:46:25 np0005590528 podman[96460]: 2026-01-21 13:46:25.031643788 +0000 UTC m=+0.060459630 container create ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 08:46:25 np0005590528 systemd[1]: Started libpod-conmon-ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40.scope.
Jan 21 08:46:25 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:25 np0005590528 podman[96460]: 2026-01-21 13:46:25.103451441 +0000 UTC m=+0.132267293 container init ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:25 np0005590528 podman[96460]: 2026-01-21 13:46:25.010617151 +0000 UTC m=+0.039433033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:25 np0005590528 podman[96460]: 2026-01-21 13:46:25.108266148 +0000 UTC m=+0.137081980 container start ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:25 np0005590528 podman[96460]: 2026-01-21 13:46:25.110975964 +0000 UTC m=+0.139792036 container attach ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 08:46:25 np0005590528 intelligent_sutherland[96493]: 167 167
Jan 21 08:46:25 np0005590528 systemd[1]: libpod-ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40.scope: Deactivated successfully.
Jan 21 08:46:25 np0005590528 conmon[96493]: conmon ccceef272ecbcfd8a367 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40.scope/container/memory.events
Jan 21 08:46:25 np0005590528 podman[96500]: 2026-01-21 13:46:25.149478093 +0000 UTC m=+0.022143666 container died ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:25 np0005590528 systemd[1]: var-lib-containers-storage-overlay-12f1cc892b71f23caf956e9f54db7aa928f1c6e1efd4ee8eb76c738a813ebdd9-merged.mount: Deactivated successfully.
Jan 21 08:46:25 np0005590528 podman[96500]: 2026-01-21 13:46:25.189764285 +0000 UTC m=+0.062429908 container remove ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:25 np0005590528 systemd[1]: libpod-conmon-ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40.scope: Deactivated successfully.
Jan 21 08:46:25 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 08:46:25 np0005590528 pensive_galois[96443]: 
Jan 21 08:46:25 np0005590528 pensive_galois[96443]: [{"container_id": "52571d403aea", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.20%", "created": "2026-01-21T13:45:02.316168Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-21T13:45:02.372493Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174177Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-21T13:45:02.184390Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@crash.compute-0", "version": "20.2.0"}, {"container_id": "380cea61fdd3", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.84%", "created": "2026-01-21T13:46:21.655932Z", "daemon_id": "cephfs.compute-0.ddixwa", "daemon_name": "mds.cephfs.compute-0.ddixwa", "daemon_type": "mds", "events": ["2026-01-21T13:46:21.731740Z daemon:mds.cephfs.compute-0.ddixwa [INFO] \"Deployed mds.cephfs.compute-0.ddixwa on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174849Z", "memory_usage": 12582912, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-21T13:46:20.337242Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mds.cephfs.compute-0.ddixwa", "version": "20.2.0"}, {"container_id": "e43620387fac", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.15%", "created": "2026-01-21T13:44:22.810508Z", "daemon_id": "compute-0.tnwklj", "daemon_name": "mgr.compute-0.tnwklj", "daemon_type": "mgr", "events": ["2026-01-21T13:45:06.651074Z daemon:mgr.compute-0.tnwklj [INFO] \"Reconfigured mgr.compute-0.tnwklj on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174103Z", "memory_usage": 549768396, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-21T13:44:22.699661Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mgr.compute-0.tnwklj", "version": "20.2.0"}, {"container_id": "cfe4b6f08f6d", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.88%", "created": "2026-01-21T13:44:18.779414Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-21T13:45:05.928055Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.173996Z", "memory_request": 2147483648, "memory_usage": 45382369, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-21T13:44:20.894393Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mon.compute-0", "version": "20.2.0"}, {"container_id": "534fa4fe4148", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.65%", "created": "2026-01-21T13:45:27.274662Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-21T13:45:27.343889Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174262Z", "memory_request": 4294967296, "memory_usage": 56476303, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T13:45:27.152712Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@osd.0", "version": "20.2.0"}, {"container_id": "75f58788bd5e", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.80%", "created": "2026-01-21T13:45:31.873769Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-21T13:45:32.072850Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174526Z", "memory_request": 4294967296, "memory_usage": 58625884, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T13:45:31.716084Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@osd.1", "version": "20.2.0"}, {"container_id": "391c65d49d06", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.94%", "created": "2026-01-21T13:45:36.373873Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-21T13:45:36.548018Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174617Z", "memory_request": 4294967296, "memory_usage": 56916705, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T13:45:36.206432Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@osd.2", "version": "20.2.0"}, {"container_id": "d95768cf4dac", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Jan 21 08:46:25 np0005590528 systemd[1]: libpod-3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9.scope: Deactivated successfully.
Jan 21 08:46:25 np0005590528 podman[96428]: 2026-01-21 13:46:25.384179828 +0000 UTC m=+0.588134227 container died 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:25 np0005590528 podman[96522]: 2026-01-21 13:46:25.39957744 +0000 UTC m=+0.054338713 container create 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:25 np0005590528 podman[96428]: 2026-01-21 13:46:25.437582347 +0000 UTC m=+0.641536746 container remove 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 08:46:25 np0005590528 systemd[1]: Started libpod-conmon-7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7.scope.
Jan 21 08:46:25 np0005590528 systemd[1]: libpod-conmon-3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9.scope: Deactivated successfully.
Jan 21 08:46:25 np0005590528 podman[96522]: 2026-01-21 13:46:25.369975286 +0000 UTC m=+0.024736549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:25 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:25 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:25 np0005590528 podman[96522]: 2026-01-21 13:46:25.503485538 +0000 UTC m=+0.158246811 container init 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:25 np0005590528 podman[96522]: 2026-01-21 13:46:25.511162243 +0000 UTC m=+0.165923536 container start 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:25 np0005590528 podman[96522]: 2026-01-21 13:46:25.51519548 +0000 UTC m=+0.169956763 container attach 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 08:46:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v82: 10 pgs: 1 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 21 08:46:25 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c3fd812f315d4a37495b3473e75ec73b68dd7bd4038ad5198361753b9fa22602-merged.mount: Deactivated successfully.
Jan 21 08:46:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 21 08:46:25 np0005590528 rsyslogd[1002]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "52571d403aea", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 21 08:46:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 08:46:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 21 08:46:25 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 21 08:46:25 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 39 pg[10.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [2] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:25 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 21 08:46:25 np0005590528 beautiful_black[96551]: {
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:    "0": [
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:        {
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "devices": [
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "/dev/loop3"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            ],
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_name": "ceph_lv0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_size": "21470642176",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "name": "ceph_lv0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "tags": {
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.crush_device_class": "",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.encrypted": "0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osd_id": "0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.type": "block",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.vdo": "0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.with_tpm": "0"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            },
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "type": "block",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "vg_name": "ceph_vg0"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:        }
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:    ],
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:    "1": [
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:        {
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "devices": [
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "/dev/loop4"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            ],
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_name": "ceph_lv1",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_size": "21470642176",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "name": "ceph_lv1",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "tags": {
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.crush_device_class": "",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.encrypted": "0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osd_id": "1",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.type": "block",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.vdo": "0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.with_tpm": "0"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            },
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "type": "block",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "vg_name": "ceph_vg1"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:        }
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:    ],
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:    "2": [
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:        {
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "devices": [
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "/dev/loop5"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            ],
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_name": "ceph_lv2",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_size": "21470642176",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "name": "ceph_lv2",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "tags": {
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.crush_device_class": "",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.encrypted": "0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osd_id": "2",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.type": "block",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.vdo": "0",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:                "ceph.with_tpm": "0"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            },
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "type": "block",
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:            "vg_name": "ceph_vg2"
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:        }
Jan 21 08:46:25 np0005590528 beautiful_black[96551]:    ]
Jan 21 08:46:25 np0005590528 beautiful_black[96551]: }
Jan 21 08:46:25 np0005590528 systemd[1]: libpod-7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7.scope: Deactivated successfully.
Jan 21 08:46:25 np0005590528 podman[96522]: 2026-01-21 13:46:25.849767726 +0000 UTC m=+0.504528989 container died 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:25 np0005590528 systemd[1]: var-lib-containers-storage-overlay-89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56-merged.mount: Deactivated successfully.
Jan 21 08:46:25 np0005590528 podman[96522]: 2026-01-21 13:46:25.891828312 +0000 UTC m=+0.546589565 container remove 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:46:25 np0005590528 systemd[1]: libpod-conmon-7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7.scope: Deactivated successfully.
Jan 21 08:46:26 np0005590528 podman[96661]: 2026-01-21 13:46:26.405781868 +0000 UTC m=+0.062767646 container create 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 08:46:26 np0005590528 systemd[1]: Started libpod-conmon-83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f.scope.
Jan 21 08:46:26 np0005590528 python3[96660]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:26 np0005590528 podman[96661]: 2026-01-21 13:46:26.383674815 +0000 UTC m=+0.040660623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:26 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:26 np0005590528 podman[96661]: 2026-01-21 13:46:26.498521197 +0000 UTC m=+0.155507065 container init 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 08:46:26 np0005590528 podman[96661]: 2026-01-21 13:46:26.507524444 +0000 UTC m=+0.164510252 container start 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:26 np0005590528 nostalgic_panini[96677]: 167 167
Jan 21 08:46:26 np0005590528 podman[96661]: 2026-01-21 13:46:26.514091593 +0000 UTC m=+0.171077401 container attach 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:26 np0005590528 systemd[1]: libpod-83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f.scope: Deactivated successfully.
Jan 21 08:46:26 np0005590528 podman[96661]: 2026-01-21 13:46:26.515983448 +0000 UTC m=+0.172969256 container died 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:26 np0005590528 podman[96679]: 2026-01-21 13:46:26.546789512 +0000 UTC m=+0.068698070 container create 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 08:46:26 np0005590528 systemd[1]: var-lib-containers-storage-overlay-a8887e8413acb4a6e7fcf58467c4a11082edc8427dca70f240b2788f13b1037b-merged.mount: Deactivated successfully.
Jan 21 08:46:26 np0005590528 podman[96661]: 2026-01-21 13:46:26.577828881 +0000 UTC m=+0.234814689 container remove 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:26 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 5 completed events
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 08:46:26 np0005590528 systemd[1]: Started libpod-conmon-2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4.scope.
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:26 np0005590528 systemd[1]: libpod-conmon-83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f.scope: Deactivated successfully.
Jan 21 08:46:26 np0005590528 podman[96679]: 2026-01-21 13:46:26.510118167 +0000 UTC m=+0.032026765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:26 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c51465ba9c61af92a96b5967ff3705951332c35a79edae95ab4303a2ec1c5b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c51465ba9c61af92a96b5967ff3705951332c35a79edae95ab4303a2ec1c5b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 21 08:46:26 np0005590528 podman[96679]: 2026-01-21 13:46:26.651206543 +0000 UTC m=+0.173115051 container init 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 08:46:26 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:26 np0005590528 podman[96679]: 2026-01-21 13:46:26.657662268 +0000 UTC m=+0.179570776 container start 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:26 np0005590528 podman[96679]: 2026-01-21 13:46:26.660821715 +0000 UTC m=+0.182730223 container attach 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:26 np0005590528 podman[96720]: 2026-01-21 13:46:26.749236809 +0000 UTC m=+0.053028481 container create abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:26 np0005590528 systemd[1]: Started libpod-conmon-abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f.scope.
Jan 21 08:46:26 np0005590528 podman[96720]: 2026-01-21 13:46:26.721347336 +0000 UTC m=+0.025139078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:26 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:26 np0005590528 podman[96720]: 2026-01-21 13:46:26.862709148 +0000 UTC m=+0.166500880 container init abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:46:26 np0005590528 podman[96720]: 2026-01-21 13:46:26.870963237 +0000 UTC m=+0.174754929 container start abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 08:46:26 np0005590528 podman[96720]: 2026-01-21 13:46:26.874751048 +0000 UTC m=+0.178542790 container attach abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/254387530' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 08:46:27 np0005590528 epic_yalow[96709]: 
Jan 21 08:46:27 np0005590528 epic_yalow[96709]: {"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":126,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":3,"osd_up_since":1769003143,"num_in_osds":3,"osd_in_since":1769003119,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":9},{"state_name":"unknown","count":1}],"num_pgs":10,"num_pools":10,"num_objects":29,"data_bytes":463390,"bytes_used":84107264,"bytes_avail":64327819264,"bytes_total":64411926528,"unknown_pgs_ratio":0.10000000149011612,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":5,"btime":"2026-01-21T13:46:23:724742+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.ddixwa","status":"up:active","gid":14256}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T13:45:41.522372+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"37e12876-a85b-42c9-8ae6-94fa3a820be5":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 21 08:46:27 np0005590528 systemd[1]: libpod-2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4.scope: Deactivated successfully.
Jan 21 08:46:27 np0005590528 podman[96679]: 2026-01-21 13:46:27.211870046 +0000 UTC m=+0.733778554 container died 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 08:46:27 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8c51465ba9c61af92a96b5967ff3705951332c35a79edae95ab4303a2ec1c5b8-merged.mount: Deactivated successfully.
Jan 21 08:46:27 np0005590528 podman[96679]: 2026-01-21 13:46:27.262696832 +0000 UTC m=+0.784605340 container remove 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 08:46:27 np0005590528 systemd[1]: libpod-conmon-2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4.scope: Deactivated successfully.
Jan 21 08:46:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 2 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 21 08:46:27 np0005590528 lvm[96848]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:46:27 np0005590528 lvm[96849]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:46:27 np0005590528 lvm[96848]: VG ceph_vg0 finished
Jan 21 08:46:27 np0005590528 lvm[96849]: VG ceph_vg1 finished
Jan 21 08:46:27 np0005590528 lvm[96851]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:46:27 np0005590528 lvm[96851]: VG ceph_vg2 finished
Jan 21 08:46:27 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 21 08:46:27 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 41 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 08:46:27 np0005590528 trusting_pare[96755]: {}
Jan 21 08:46:27 np0005590528 systemd[1]: libpod-abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f.scope: Deactivated successfully.
Jan 21 08:46:27 np0005590528 systemd[1]: libpod-abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f.scope: Consumed 1.337s CPU time.
Jan 21 08:46:27 np0005590528 podman[96720]: 2026-01-21 13:46:27.699007925 +0000 UTC m=+1.002799577 container died abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 08:46:27 np0005590528 systemd[1]: var-lib-containers-storage-overlay-21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c-merged.mount: Deactivated successfully.
Jan 21 08:46:27 np0005590528 ceph-mds[95704]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 21 08:46:27 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa[95223]: 2026-01-21T13:46:27.730+0000 7f4ca20fb640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 21 08:46:27 np0005590528 podman[96720]: 2026-01-21 13:46:27.747631889 +0000 UTC m=+1.051423581 container remove abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:46:27 np0005590528 systemd[1]: libpod-conmon-abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f.scope: Deactivated successfully.
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:28 np0005590528 python3[96969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:28 np0005590528 podman[97000]: 2026-01-21 13:46:28.419173088 +0000 UTC m=+0.039482254 container create d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:28 np0005590528 systemd[1]: Started libpod-conmon-d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299.scope.
Jan 21 08:46:28 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/421e495289fb0e711380a992ce33f0e724cb48daf6b34cc6a19664ba83939791/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/421e495289fb0e711380a992ce33f0e724cb48daf6b34cc6a19664ba83939791/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:28 np0005590528 podman[97000]: 2026-01-21 13:46:28.399794311 +0000 UTC m=+0.020103487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:28 np0005590528 podman[97025]: 2026-01-21 13:46:28.511944698 +0000 UTC m=+0.064856497 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 08:46:28 np0005590528 podman[97000]: 2026-01-21 13:46:28.516029367 +0000 UTC m=+0.136338533 container init d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:28 np0005590528 podman[97000]: 2026-01-21 13:46:28.529470161 +0000 UTC m=+0.149779337 container start d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:28 np0005590528 podman[97000]: 2026-01-21 13:46:28.533534179 +0000 UTC m=+0.153843345 container attach d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:28 np0005590528 podman[97025]: 2026-01-21 13:46:28.627329513 +0000 UTC m=+0.180241352 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 08:46:28 np0005590528 radosgw[94709]: v1 topic migration: starting v1 topic migration..
Jan 21 08:46:28 np0005590528 radosgw[94709]: v1 topic migration: finished v1 topic migration
Jan 21 08:46:28 np0005590528 radosgw[94709]: framework: beast
Jan 21 08:46:28 np0005590528 radosgw[94709]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 21 08:46:28 np0005590528 radosgw[94709]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 21 08:46:28 np0005590528 radosgw[94709]: starting handler: beast
Jan 21 08:46:28 np0005590528 radosgw[94709]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 08:46:28 np0005590528 radosgw[94709]: mgrc service_daemon_register rgw.14254 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.xeytxr,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f4815f66-3704-4561-98d8-80b5d3621d9a,zone_name=default,zonegroup_id=ce8ca06b-86cb-4011-b9c4-0ea7e0974e31,zonegroup_name=default}
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 08:46:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297057373' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 08:46:28 np0005590528 magical_brattain[97034]: 
Jan 21 08:46:29 np0005590528 magical_brattain[97034]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.xeytxr","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 21 08:46:29 np0005590528 systemd[1]: libpod-d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299.scope: Deactivated successfully.
Jan 21 08:46:29 np0005590528 podman[97000]: 2026-01-21 13:46:29.00169956 +0000 UTC m=+0.622008716 container died d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:29 np0005590528 systemd[1]: var-lib-containers-storage-overlay-421e495289fb0e711380a992ce33f0e724cb48daf6b34cc6a19664ba83939791-merged.mount: Deactivated successfully.
Jan 21 08:46:29 np0005590528 podman[97000]: 2026-01-21 13:46:29.042181667 +0000 UTC m=+0.662490823 container remove d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 08:46:29 np0005590528 systemd[1]: libpod-conmon-d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299.scope: Deactivated successfully.
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:46:29 np0005590528 podman[97363]: 2026-01-21 13:46:29.993095811 +0000 UTC m=+0.043394678 container create 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:30 np0005590528 systemd[1]: Started libpod-conmon-337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c.scope.
Jan 21 08:46:30 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:30 np0005590528 podman[97363]: 2026-01-21 13:46:29.975100127 +0000 UTC m=+0.025398974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:30 np0005590528 podman[97363]: 2026-01-21 13:46:30.070834537 +0000 UTC m=+0.121133394 container init 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:30 np0005590528 podman[97363]: 2026-01-21 13:46:30.076482444 +0000 UTC m=+0.126781291 container start 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:30 np0005590528 podman[97363]: 2026-01-21 13:46:30.079611189 +0000 UTC m=+0.129910046 container attach 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:30 np0005590528 exciting_liskov[97386]: 167 167
Jan 21 08:46:30 np0005590528 systemd[1]: libpod-337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c.scope: Deactivated successfully.
Jan 21 08:46:30 np0005590528 podman[97363]: 2026-01-21 13:46:30.086966537 +0000 UTC m=+0.137265414 container died 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:30 np0005590528 systemd[1]: var-lib-containers-storage-overlay-f41dfaab02671b937ce035cb66d48533c351a2ba5f31cfe76e6258f624e20932-merged.mount: Deactivated successfully.
Jan 21 08:46:30 np0005590528 python3[97375]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:30 np0005590528 podman[97363]: 2026-01-21 13:46:30.131172664 +0000 UTC m=+0.181471501 container remove 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 08:46:30 np0005590528 systemd[1]: libpod-conmon-337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c.scope: Deactivated successfully.
Jan 21 08:46:30 np0005590528 podman[97403]: 2026-01-21 13:46:30.206610855 +0000 UTC m=+0.061282751 container create 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:30 np0005590528 systemd[1]: Started libpod-conmon-6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa.scope.
Jan 21 08:46:30 np0005590528 podman[97403]: 2026-01-21 13:46:30.178601429 +0000 UTC m=+0.033273315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:30 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a305a0f1f0498ea0c86306810caad44856bec9ab72f28553ae127345a44c79f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a305a0f1f0498ea0c86306810caad44856bec9ab72f28553ae127345a44c79f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:30 np0005590528 podman[97423]: 2026-01-21 13:46:30.296095995 +0000 UTC m=+0.052127499 container create bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 08:46:30 np0005590528 podman[97403]: 2026-01-21 13:46:30.307970121 +0000 UTC m=+0.162641997 container init 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 08:46:30 np0005590528 podman[97403]: 2026-01-21 13:46:30.315909483 +0000 UTC m=+0.170581379 container start 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:30 np0005590528 podman[97403]: 2026-01-21 13:46:30.321030087 +0000 UTC m=+0.175702013 container attach 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:30 np0005590528 systemd[1]: Started libpod-conmon-bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd.scope.
Jan 21 08:46:30 np0005590528 podman[97423]: 2026-01-21 13:46:30.271183433 +0000 UTC m=+0.027214917 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:30 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:30 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:30 np0005590528 podman[97423]: 2026-01-21 13:46:30.402026212 +0000 UTC m=+0.158057756 container init bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:46:30 np0005590528 podman[97423]: 2026-01-21 13:46:30.412200908 +0000 UTC m=+0.168232412 container start bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 08:46:30 np0005590528 podman[97423]: 2026-01-21 13:46:30.417163067 +0000 UTC m=+0.173194621 container attach bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 21 08:46:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/618331159' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 21 08:46:30 np0005590528 festive_yonath[97430]: mimic
Jan 21 08:46:30 np0005590528 systemd[1]: libpod-6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa.scope: Deactivated successfully.
Jan 21 08:46:30 np0005590528 podman[97403]: 2026-01-21 13:46:30.761793747 +0000 UTC m=+0.616465703 container died 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:30 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8a305a0f1f0498ea0c86306810caad44856bec9ab72f28553ae127345a44c79f-merged.mount: Deactivated successfully.
Jan 21 08:46:30 np0005590528 podman[97403]: 2026-01-21 13:46:30.820621867 +0000 UTC m=+0.675293773 container remove 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:46:30 np0005590528 systemd[1]: libpod-conmon-6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa.scope: Deactivated successfully.
Jan 21 08:46:30 np0005590528 jovial_kapitsa[97444]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:46:30 np0005590528 jovial_kapitsa[97444]: --> All data devices are unavailable
Jan 21 08:46:31 np0005590528 systemd[1]: libpod-bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd.scope: Deactivated successfully.
Jan 21 08:46:31 np0005590528 podman[97423]: 2026-01-21 13:46:31.021097046 +0000 UTC m=+0.777128580 container died bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:31 np0005590528 systemd[1]: var-lib-containers-storage-overlay-41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97-merged.mount: Deactivated successfully.
Jan 21 08:46:31 np0005590528 podman[97423]: 2026-01-21 13:46:31.081823681 +0000 UTC m=+0.837855185 container remove bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 08:46:31 np0005590528 systemd[1]: libpod-conmon-bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd.scope: Deactivated successfully.
Jan 21 08:46:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 11 KiB/s wr, 211 op/s
Jan 21 08:46:31 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 37e12876-a85b-42c9-8ae6-94fa3a820be5 (Global Recovery Event) in 10 seconds
Jan 21 08:46:31 np0005590528 podman[97568]: 2026-01-21 13:46:31.613639958 +0000 UTC m=+0.048444890 container create e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:31 np0005590528 systemd[1]: Started libpod-conmon-e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4.scope.
Jan 21 08:46:31 np0005590528 podman[97568]: 2026-01-21 13:46:31.587676861 +0000 UTC m=+0.022481813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:31 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:31 np0005590528 podman[97568]: 2026-01-21 13:46:31.714306349 +0000 UTC m=+0.149111271 container init e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 08:46:31 np0005590528 podman[97568]: 2026-01-21 13:46:31.726475443 +0000 UTC m=+0.161280335 container start e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:46:31 np0005590528 podman[97568]: 2026-01-21 13:46:31.729945976 +0000 UTC m=+0.164750958 container attach e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:46:31 np0005590528 serene_gagarin[97584]: 167 167
Jan 21 08:46:31 np0005590528 systemd[1]: libpod-e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4.scope: Deactivated successfully.
Jan 21 08:46:31 np0005590528 podman[97568]: 2026-01-21 13:46:31.734192848 +0000 UTC m=+0.168997740 container died e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:31 np0005590528 systemd[1]: var-lib-containers-storage-overlay-28336d4a267a254d44df21562f1273742de6dbd485b5b97e503e51dd10bb4021-merged.mount: Deactivated successfully.
Jan 21 08:46:31 np0005590528 podman[97568]: 2026-01-21 13:46:31.772819991 +0000 UTC m=+0.207624873 container remove e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 08:46:31 np0005590528 systemd[1]: libpod-conmon-e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4.scope: Deactivated successfully.
Jan 21 08:46:31 np0005590528 python3[97625]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:46:31 np0005590528 podman[97633]: 2026-01-21 13:46:31.941348629 +0000 UTC m=+0.043625004 container create 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:31 np0005590528 podman[97640]: 2026-01-21 13:46:31.973060315 +0000 UTC m=+0.050548981 container create cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 08:46:31 np0005590528 systemd[1]: Started libpod-conmon-35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962.scope.
Jan 21 08:46:31 np0005590528 systemd[1]: Started libpod-conmon-cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09.scope.
Jan 21 08:46:32 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:32 np0005590528 podman[97633]: 2026-01-21 13:46:31.922698079 +0000 UTC m=+0.024974514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:32 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:32 np0005590528 podman[97633]: 2026-01-21 13:46:32.021524724 +0000 UTC m=+0.123801119 container init 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 08:46:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006d66e62ea34b9883aacc3a981b68561b42d94f9cb961df6d72eebd811deb41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006d66e62ea34b9883aacc3a981b68561b42d94f9cb961df6d72eebd811deb41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:32 np0005590528 podman[97633]: 2026-01-21 13:46:32.030153672 +0000 UTC m=+0.132430057 container start 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 08:46:32 np0005590528 podman[97633]: 2026-01-21 13:46:32.034535159 +0000 UTC m=+0.136811574 container attach 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 08:46:32 np0005590528 podman[97640]: 2026-01-21 13:46:32.038378971 +0000 UTC m=+0.115867647 container init cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:46:32 np0005590528 podman[97640]: 2026-01-21 13:46:32.043815262 +0000 UTC m=+0.121303938 container start cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:46:32 np0005590528 podman[97640]: 2026-01-21 13:46:31.948231185 +0000 UTC m=+0.025719891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:46:32 np0005590528 podman[97640]: 2026-01-21 13:46:32.04829866 +0000 UTC m=+0.125787336 container attach cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]: {
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:    "0": [
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:        {
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "devices": [
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "/dev/loop3"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            ],
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_name": "ceph_lv0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_size": "21470642176",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "name": "ceph_lv0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "tags": {
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.crush_device_class": "",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.encrypted": "0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osd_id": "0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.type": "block",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.vdo": "0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.with_tpm": "0"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            },
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "type": "block",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "vg_name": "ceph_vg0"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:        }
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:    ],
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:    "1": [
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:        {
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "devices": [
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "/dev/loop4"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            ],
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_name": "ceph_lv1",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_size": "21470642176",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "name": "ceph_lv1",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "tags": {
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.crush_device_class": "",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.encrypted": "0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osd_id": "1",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.type": "block",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.vdo": "0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.with_tpm": "0"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            },
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "type": "block",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "vg_name": "ceph_vg1"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:        }
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:    ],
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:    "2": [
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:        {
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "devices": [
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "/dev/loop5"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            ],
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_name": "ceph_lv2",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_size": "21470642176",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "name": "ceph_lv2",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "tags": {
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.cluster_name": "ceph",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.crush_device_class": "",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.encrypted": "0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.objectstore": "bluestore",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osd_id": "2",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.type": "block",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.vdo": "0",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:                "ceph.with_tpm": "0"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            },
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "type": "block",
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:            "vg_name": "ceph_vg2"
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:        }
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]:    ]
Jan 21 08:46:32 np0005590528 admiring_merkle[97665]: }
Jan 21 08:46:32 np0005590528 systemd[1]: libpod-35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962.scope: Deactivated successfully.
Jan 21 08:46:32 np0005590528 podman[97633]: 2026-01-21 13:46:32.318450051 +0000 UTC m=+0.420726436 container died 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:32 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7-merged.mount: Deactivated successfully.
Jan 21 08:46:32 np0005590528 podman[97633]: 2026-01-21 13:46:32.373316306 +0000 UTC m=+0.475592701 container remove 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:32 np0005590528 systemd[1]: libpod-conmon-35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962.scope: Deactivated successfully.
Jan 21 08:46:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 21 08:46:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516634605' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 21 08:46:32 np0005590528 eager_brattain[97668]: 
Jan 21 08:46:32 np0005590528 eager_brattain[97668]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 21 08:46:32 np0005590528 systemd[1]: libpod-cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09.scope: Deactivated successfully.
Jan 21 08:46:32 np0005590528 podman[97640]: 2026-01-21 13:46:32.569930062 +0000 UTC m=+0.647418748 container died cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 08:46:32 np0005590528 systemd[1]: var-lib-containers-storage-overlay-006d66e62ea34b9883aacc3a981b68561b42d94f9cb961df6d72eebd811deb41-merged.mount: Deactivated successfully.
Jan 21 08:46:32 np0005590528 podman[97640]: 2026-01-21 13:46:32.620921342 +0000 UTC m=+0.698410068 container remove cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 08:46:32 np0005590528 systemd[1]: libpod-conmon-cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09.scope: Deactivated successfully.
Jan 21 08:46:32 np0005590528 podman[97784]: 2026-01-21 13:46:32.841082067 +0000 UTC m=+0.053896262 container create 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:32 np0005590528 systemd[1]: Started libpod-conmon-858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317.scope.
Jan 21 08:46:32 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:32 np0005590528 podman[97784]: 2026-01-21 13:46:32.823117664 +0000 UTC m=+0.035931879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:32 np0005590528 podman[97784]: 2026-01-21 13:46:32.925078845 +0000 UTC m=+0.137893080 container init 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:32 np0005590528 podman[97784]: 2026-01-21 13:46:32.936252175 +0000 UTC m=+0.149066400 container start 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:32 np0005590528 podman[97784]: 2026-01-21 13:46:32.940582989 +0000 UTC m=+0.153397184 container attach 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 08:46:32 np0005590528 gallant_sinoussi[97800]: 167 167
Jan 21 08:46:32 np0005590528 systemd[1]: libpod-858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317.scope: Deactivated successfully.
Jan 21 08:46:32 np0005590528 podman[97784]: 2026-01-21 13:46:32.944046702 +0000 UTC m=+0.156860917 container died 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:32 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6d971799eeeffb1be6369a3510b5933dd3b329e35e99388a924c6119cc5aa7c6-merged.mount: Deactivated successfully.
Jan 21 08:46:32 np0005590528 podman[97784]: 2026-01-21 13:46:32.995887164 +0000 UTC m=+0.208701359 container remove 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:46:33 np0005590528 systemd[1]: libpod-conmon-858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317.scope: Deactivated successfully.
Jan 21 08:46:33 np0005590528 podman[97825]: 2026-01-21 13:46:33.19250016 +0000 UTC m=+0.048344368 container create a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:33 np0005590528 systemd[1]: Started libpod-conmon-a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e.scope.
Jan 21 08:46:33 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:46:33 np0005590528 podman[97825]: 2026-01-21 13:46:33.174356742 +0000 UTC m=+0.030200970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:46:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:46:33 np0005590528 podman[97825]: 2026-01-21 13:46:33.322281292 +0000 UTC m=+0.178125510 container init a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:46:33 np0005590528 podman[97825]: 2026-01-21 13:46:33.330249705 +0000 UTC m=+0.186093923 container start a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:46:33 np0005590528 podman[97825]: 2026-01-21 13:46:33.333844782 +0000 UTC m=+0.189689000 container attach a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:46:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 9.6 KiB/s wr, 181 op/s
Jan 21 08:46:34 np0005590528 lvm[97918]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:46:34 np0005590528 lvm[97918]: VG ceph_vg0 finished
Jan 21 08:46:34 np0005590528 lvm[97920]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:46:34 np0005590528 lvm[97920]: VG ceph_vg1 finished
Jan 21 08:46:34 np0005590528 lvm[97922]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:46:34 np0005590528 lvm[97922]: VG ceph_vg2 finished
Jan 21 08:46:34 np0005590528 charming_kirch[97841]: {}
Jan 21 08:46:34 np0005590528 systemd[1]: libpod-a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e.scope: Deactivated successfully.
Jan 21 08:46:34 np0005590528 systemd[1]: libpod-a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e.scope: Consumed 1.364s CPU time.
Jan 21 08:46:34 np0005590528 podman[97825]: 2026-01-21 13:46:34.11841501 +0000 UTC m=+0.974259258 container died a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 08:46:34 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67-merged.mount: Deactivated successfully.
Jan 21 08:46:34 np0005590528 podman[97825]: 2026-01-21 13:46:34.424835687 +0000 UTC m=+1.280679905 container remove a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:46:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:46:34 np0005590528 systemd[1]: libpod-conmon-a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e.scope: Deactivated successfully.
Jan 21 08:46:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:46:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:34 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:34 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Jan 21 08:46:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:36 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 6 completed events
Jan 21 08:46:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 08:46:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:36 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 6.7 KiB/s wr, 144 op/s
Jan 21 08:46:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:46:39
Jan 21 08:46:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:46:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:46:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'images']
Jan 21 08:46:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:46:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 6.1 KiB/s wr, 131 op/s
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.041523176221679e-07 of space, bias 4.0, pg target 0.0009649827811466015 quantized to 16 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 08:46:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:46:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:41 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev 6ea2c028-57ff-4cd8-a4dc-dd541e357001 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v95: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 17 op/s
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 21 08:46:42 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev ea0f3e94-5f24-4874-b858-f72380263c3a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:42 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44 pruub=15.468605995s) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active pruub 80.815086365s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:42 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44 pruub=15.468605995s) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown pruub 80.815086365s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 21 08:46:43 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev 309baaee-8c82-40b7-82ca-97257dcf4e62 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1e( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.c( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.e( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.10( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.12( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1e( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.0( empty local-lis/les=44/45 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.e( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.12( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.10( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v98: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 21 08:46:44 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev d29d1668-d126-47d3-b5d4-e19525facf01 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 21 08:46:44 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46 pruub=15.843464851s) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active pruub 87.345634460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:44 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46 pruub=15.843464851s) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown pruub 87.345634460s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 21 08:46:44 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=8.559959412s) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active pruub 85.138343811s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:44 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=8.559959412s) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown pruub 85.138343811s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 21 08:46:45 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev 7841d6e4-20a9-4b84-aa54-4dcb82cb141c (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1e( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1c( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1d( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1b( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1f( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1a( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.7( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.19( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.6( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.18( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.3( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.5( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.8( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.a( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.b( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.4( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.2( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.9( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1f( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1e( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1c( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1d( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.e( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.f( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.d( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.c( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.10( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.7( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.b( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.12( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.11( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.13( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.15( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.16( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.17( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.8( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.6( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1b( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.a( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1b( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.14( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.5( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1a( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.9( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.4( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.3( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.19( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.2( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.c( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.d( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.f( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.e( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.10( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.11( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.12( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.13( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.15( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.14( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.16( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.17( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.18( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1f( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1e( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1c( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1d( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1a( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1c( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1d( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.7( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.6( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.19( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.3( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.a( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.18( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.4( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.b( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.2( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.8( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.5( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.9( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.0( empty local-lis/les=46/47 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.b( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.6( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.c( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.5( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.d( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.10( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.7( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.9( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.8( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.4( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.3( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.12( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.11( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.13( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.15( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.17( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.16( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.14( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1b( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.19( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.2( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.c( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.0( empty local-lis/les=46/47 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.d( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.f( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.e( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.13( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.10( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.12( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.11( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.16( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.15( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.17( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.14( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.18( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v101: 104 pgs: 2 peering, 62 unknown, 40 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 21 08:46:45 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 21 08:46:45 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 21 08:46:45 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 21 08:46:46 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev 3b62b1f7-921e-454b-bb9f-f74107de3873 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:46 np0005590528 ceph-mgr[75322]: [progress WARNING root] Starting Global Recovery Event,110 pgs not in active + clean state
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 48 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=25/26 n=22 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=48 pruub=9.888830185s) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 36'38 mlcod 36'38 active pruub 89.168746948s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 48 pg[6.0( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=48 pruub=9.888830185s) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 36'38 mlcod 0'0 unknown pruub 89.168746948s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.4( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.5( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.7( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.2( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.e( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.c( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:47 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev e0201a3f-5d88-4b29-a15d-92205f718d90 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v104: 150 pgs: 2 peering, 108 unknown, 40 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=14.814599991s) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active pruub 85.865051270s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=14.814599991s) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown pruub 85.865051270s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.7( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.9( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.14( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.15( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.3( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.2( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.a( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.16( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.17( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.18( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.19( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.c( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.e( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.5( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.6( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.4( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.f( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.10( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.11( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1a( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1b( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1c( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1d( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1e( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1f( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 21 08:46:48 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev c6da06d2-209d-4646-8967-ce2e4e0098de (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 36'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.10( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1f( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.17( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.8( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.a( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.b( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.0( empty local-lis/les=48/50 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.6( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.e( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.d( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1c( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1b( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 21 08:46:48 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 21 08:46:49 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev b85d4030-337d-448f-a812-f898b5ae1624 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 21 08:46:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v107: 212 pgs: 1 peering, 93 unknown, 118 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 21 08:46:49 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 21 08:46:49 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 50 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=50 pruub=8.076550484s) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active pruub 85.412124634s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 50 pg[8.0( v 35'6 (0'0,35'6] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50 pruub=11.691671371s) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 35'5 mlcod 35'5 active pruub 89.027275085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 50 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=50 pruub=8.076550484s) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown pruub 85.412124634s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 50 pg[8.0( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50 pruub=11.691671371s) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 35'5 mlcod 0'0 unknown pruub 89.027275085s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.7( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.b( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.d( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.10( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.12( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.14( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.16( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.17( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.19( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1d( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1e( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1( v 35'6 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.2( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.3( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.4( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.5( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.6( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.7( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.8( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.9( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.a( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.d( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.e( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.c( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.10( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.11( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.12( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.13( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.15( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.14( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.16( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.17( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.18( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.19( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1a( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1c( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1d( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1e( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:49 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev b51c9854-0c23-4ac9-ae5d-13a0adeaed63 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev 6ea2c028-57ff-4cd8-a4dc-dd541e357001 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 6ea2c028-57ff-4cd8-a4dc-dd541e357001 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev ea0f3e94-5f24-4874-b858-f72380263c3a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event ea0f3e94-5f24-4874-b858-f72380263c3a (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev 309baaee-8c82-40b7-82ca-97257dcf4e62 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 309baaee-8c82-40b7-82ca-97257dcf4e62 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev d29d1668-d126-47d3-b5d4-e19525facf01 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event d29d1668-d126-47d3-b5d4-e19525facf01 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev 7841d6e4-20a9-4b84-aa54-4dcb82cb141c (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 7841d6e4-20a9-4b84-aa54-4dcb82cb141c (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev 3b62b1f7-921e-454b-bb9f-f74107de3873 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 3b62b1f7-921e-454b-bb9f-f74107de3873 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 21 08:46:50 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=38/39 n=9 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52 pruub=15.396927834s) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 41'17 mlcod 41'17 active pruub 88.580261230s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev e0201a3f-5d88-4b29-a15d-92205f718d90 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event e0201a3f-5d88-4b29-a15d-92205f718d90 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev c6da06d2-209d-4646-8967-ce2e4e0098de (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event c6da06d2-209d-4646-8967-ce2e4e0098de (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev b85d4030-337d-448f-a812-f898b5ae1624 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event b85d4030-337d-448f-a812-f898b5ae1624 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev b51c9854-0c23-4ac9-ae5d-13a0adeaed63 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 21 08:46:50 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event b51c9854-0c23-4ac9-ae5d-13a0adeaed63 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[9.0( v 42'483 (0'0,42'483] local-lis/les=36/37 n=210 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52 pruub=13.358799934s) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 42'482 mlcod 42'482 active pruub 91.048835754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52 pruub=15.396927834s) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 41'17 mlcod 0'0 unknown pruub 88.580261230s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[9.0( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52 pruub=13.358799934s) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 42'482 mlcod 0'0 unknown pruub 91.048835754s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26600 space 0x562353353d40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b8fe80 space 0x562353335740 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b23680 space 0x562353c68e40 0x0~98 clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5ca80 space 0x562353360e40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b64100 space 0x562352de0240 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353be6000 space 0x562353361a40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a700 space 0x56235404ce40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09700 space 0x562352da5740 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bde600 space 0x562352d37140 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0c380 space 0x562352da4540 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09300 space 0x562352daa840 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a500 space 0x562352d00540 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b23c80 space 0x56235331d440 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0c180 space 0x562352da4e40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09100 space 0x562352dab140 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b65f00 space 0x56235330b140 0x0~98 clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26a00 space 0x562353335d40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b23100 space 0x562352da6840 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b08a80 space 0x562352d00e40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a900 space 0x56235404d740 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26d80 space 0x562352cf2540 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0c800 space 0x562352da8e40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a300 space 0x562353451a40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09500 space 0x562352da9d40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5dc80 space 0x56235330a840 0x0~98 clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0ab00 space 0x562352d01d40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b72580 space 0x562352de1740 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b72b00 space 0x562352da7140 0x0~98 clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5da00 space 0x562353334840 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26680 space 0x562353c92540 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bdff00 space 0x56235331cb40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0ad00 space 0x562352cf5a40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b72280 space 0x562353353140 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26b00 space 0x562353328540 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bdeb00 space 0x562352d36840 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5c480 space 0x562353c68240 0x0~98 clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0d380 space 0x562353411440 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26d00 space 0x562352dda240 0x0~98 clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5c680 space 0x562353360540 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353a83880 space 0x562353361440 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bf5300 space 0x562352cf5140 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0ca00 space 0x562352da8540 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b72100 space 0x562353345d40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a080 space 0x562353451140 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b23580 space 0x562352de0b40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5ca00 space 0x562353352b40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5db80 space 0x562352dbcb40 0x0~98 clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353be6c80 space 0x562353450840 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5d300 space 0x562353352540 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b08c80 space 0x562352dd6240 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a680 space 0x562353344e40 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26f80 space 0x562352da9740 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09180 space 0x56235333f140 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b9f080 space 0x562353410b40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b08f00 space 0x562353345740 0x0~9a clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0af00 space 0x562352dd7d40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bf5f80 space 0x562352cf4840 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0ab80 space 0x562352dbd440 0x0~98 clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26880 space 0x562352daba40 0x0~6e clean)
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.16( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.17( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1e( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.19( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1d( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.13( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.7( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.a( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.8( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.3( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.0( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 35'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=50/52 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.d( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.7( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.5( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.b( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.14( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.16( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.19( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.10( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.17( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.12( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1e( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:50 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 21 08:46:50 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 21 08:46:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 21 08:46:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 21 08:46:51 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.14( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.17( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.16( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.11( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.15( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.10( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.13( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.12( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.d( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.c( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.f( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.b( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.2( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.9( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.e( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.a( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.8( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.3( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.6( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.4( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1a( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.5( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1b( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.18( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1e( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.19( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1f( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1c( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1d( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 41'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.7( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.14( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 systemd[76413]: Starting Mark boot as successful...
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.10( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.12( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.2( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.0( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 42'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.a( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1a( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.4( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.5( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.18( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:51 np0005590528 systemd[76413]: Finished Mark boot as successful.
Jan 21 08:46:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v110: 274 pgs: 124 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 08:46:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:51 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 16 completed events
Jan 21 08:46:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 08:46:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 21 08:46:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 21 08:46:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 21 08:46:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 08:46:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:46:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 21 08:46:52 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 21 08:46:52 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=15.032677650s) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active pruub 95.111892700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:52 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=15.032677650s) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown pruub 95.111892700s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 155 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 08:46:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 21 08:46:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 21 08:46:53 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.16( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.17( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.15( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.14( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.13( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.12( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.11( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.10( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.e( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.f( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.d( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.b( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.2( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.3( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.c( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.8( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.9( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.4( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.a( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.5( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.6( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.7( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.18( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1a( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.19( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1b( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1c( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1f( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1d( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1e( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.15( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.16( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.13( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.11( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.10( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.d( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.0( empty local-lis/les=54/55 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.c( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.2( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.9( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.5( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.7( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.6( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.a( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.18( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1d( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:53 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 21 08:46:53 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 21 08:46:53 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 21 08:46:53 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 21 08:46:53 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 21 08:46:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 21 08:46:55 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.12( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.574015617s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.188224792s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1d( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561406136s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.175674438s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.12( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573951721s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.188224792s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1d( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561349869s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.175674438s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573716164s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.188201904s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1e( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565937996s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180458069s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573684692s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.188201904s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1e( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565897942s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180458069s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389556885s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004173279s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389475822s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004173279s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573497772s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.188247681s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573477745s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.188247681s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389277458s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004257202s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389109612s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004112244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389085770s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004112244s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573266983s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.188316345s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389241219s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004257202s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573219299s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.188316345s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388842583s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004112244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388818741s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004112244s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.11( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565226555s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180580139s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388652802s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004013062s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.12( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565180779s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180610657s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.11( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565173149s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180580139s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388619423s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004013062s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.12( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565144539s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180610657s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.13( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565045357s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180618286s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.13( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565014839s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180618286s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388439178s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004112244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.14( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564970970s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180664062s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388404846s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004112244s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.14( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564939499s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180664062s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578927994s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.194740295s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578907013s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.194740295s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.15( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564669609s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180671692s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.15( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564632416s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180671692s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578118324s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.194190979s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.387817383s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003997803s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578083992s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.194190979s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.387792587s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003997803s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.16( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564449310s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180702209s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.16( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564416885s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180702209s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.576922417s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.193344116s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.576900482s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.193344116s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.387430191s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003913879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.387395859s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003913879s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.9( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564129829s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180755615s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.9( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564108849s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180755615s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386993408s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003784180s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386961937s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003784180s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577178955s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.194015503s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386891365s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003829956s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577104568s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.194015503s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386854172s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003829956s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.c( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563767433s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180786133s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578066826s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195190430s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578038216s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195190430s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.7( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563652039s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180862427s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577312469s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.194526672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.c( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563675880s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180786133s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.7( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563619614s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180862427s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577281952s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.194526672s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386214256s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003608704s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386193275s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003608704s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386737823s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004241943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.f( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563352585s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180862427s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386703491s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004241943s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.9( v 53'19 (0'0,53'19] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577460289s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.195045471s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.f( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563322067s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180862427s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.9( v 53'19 (0'0,53'19] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577426910s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.195045471s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575845718s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.193817139s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575806618s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.193817139s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.385487556s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003616333s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.5( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.562740326s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180923462s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.5( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.562505722s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180923462s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384768486s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003486633s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577229500s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195953369s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384731293s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003486633s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577185631s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195953369s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.4( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.562129021s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180946350s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.4( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561977386s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180946350s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.385453224s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003616333s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.3( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561805725s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181121826s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384226799s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003570557s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384200096s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003570557s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.d( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.576145172s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.195617676s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384104729s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003601074s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.2( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561473846s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181007385s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.d( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.576097488s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.195617676s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384060860s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003601074s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.e( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575691223s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.195297241s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.2( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561427116s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181007385s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.e( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575486183s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.195297241s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383355141s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003334045s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383328438s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003334045s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561014175s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181030273s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575405121s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195426941s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575377464s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195426941s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.3( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561057091s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181121826s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560978889s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181030273s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383260727s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003448486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383241653s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003448486s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575750351s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.196014404s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383036613s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003326416s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575729370s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.196014404s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383007050s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003326416s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575522423s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195991516s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382740021s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003219604s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382728577s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003250122s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382686615s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003219604s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.15( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575555801s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.196105957s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.14( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575508118s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.196075439s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.15( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575522423s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.196105957s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.14( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575474739s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.196075439s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1a( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560396194s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181091309s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1a( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560376167s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181091309s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575473785s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195991516s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382456779s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003234863s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382434845s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003234863s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575113297s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195976257s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575053215s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195976257s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.19( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560072899s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181106567s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382429123s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003524780s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.19( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560041428s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181106567s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.574926376s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.196052551s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382410049s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003524780s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.574903488s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.196052551s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.18( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.559853554s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181106567s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.18( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.559838295s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181106567s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.381968498s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003250122s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.11( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.17( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.13( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.15( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.12( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.1a( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.19( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.1e( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.18( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.19( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.16( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.9( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.6( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.d( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.9( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.3( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.8( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.b( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.5( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.2( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.7( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.a( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.15( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.1d( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.4( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.c( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.4( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.9( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.1c( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.4( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.f( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.f( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.7( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.6( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.7( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.5( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.1( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.2( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.11( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.10( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.1f( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.17( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.1b( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.13( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.12( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.2( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.d( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.1d( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.3( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.e( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.b( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.8( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.1a( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.14( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.18( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.1( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.19( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.368993759s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515510559s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.559500694s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706054688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544999123s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.691581726s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.559462547s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706054688s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544964790s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.691581726s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.16( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.368938446s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515510559s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.959045410s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106040955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.958992958s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106040955s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544333458s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.691574097s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544312477s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.691574097s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.368309975s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515602112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.368269920s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515602112s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544124603s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.691528320s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544080734s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.691528320s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.561562538s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709175110s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.1e( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367938995s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515579224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.561527252s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709175110s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954565048s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.102287292s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367905617s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515579224s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.15( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954534531s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.102287292s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.14( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.556081772s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.703994751s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954298973s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.102249146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.13( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.556056023s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.703994751s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954261780s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.102249146s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.11( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.557409286s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705558777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.16( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.557368279s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705558777s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.18( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.370068550s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292808533s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.18( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.370034218s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292808533s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.13( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369175911s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292221069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.13( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369147301s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292221069s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.14( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369690895s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292770386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.14( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369644165s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292770386s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.12( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369027138s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292236328s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361879349s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.511253357s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361842155s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.511253357s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.556145668s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705642700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.556205750s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705650330s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.1a( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.556103706s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705642700s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.556076050s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705650330s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.956342697s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.105949402s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.1e( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.956317902s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.105949402s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.559554100s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709350586s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.555890083s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705718994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.555866241s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705718994s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.956044197s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.105979919s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.956008911s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.105979919s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.12( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369009972s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292236328s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.15( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.11( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367939949s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292259216s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.10( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367845535s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292190552s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.10( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367817879s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292190552s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.11( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367897987s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292259216s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.1d( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.e( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367544174s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292160034s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.e( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367481232s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292160034s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.d( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366911888s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291801453s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.d( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366884232s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291801453s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.525283813s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.450225830s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.525240898s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.450225830s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.f( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366933823s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291847229s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.2( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366449356s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291748047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.f( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366568565s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291847229s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.2( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366422653s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291748047s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.524724960s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.450088501s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365671158s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291137695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.15( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.524532318s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.450088501s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.555399895s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705749512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.555373192s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705749512s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.559520721s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709350586s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.558624268s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709266663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.558599472s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709266663s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365021706s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515914917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364998817s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515914917s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554696083s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705848694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554672241s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705848694s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365622520s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291137695s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954703331s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106155396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.525206566s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.450164795s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954679489s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106155396s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554334641s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705886841s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363949776s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515586853s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.524435043s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.450164795s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553997040s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705886841s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363672256s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515586853s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.4( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365040779s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290977478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.557277679s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709304810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.524024010s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.449981689s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.557236671s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709304810s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523996353s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.449981689s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953858376s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106048584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.4( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365002632s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290977478s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953831673s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106048584s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553622246s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705924988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.9( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364850998s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290985107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553588867s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705924988s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363206863s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515594482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523867607s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.450050354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363186836s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515594482s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.9( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364809990s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290985107s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523848534s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.450050354s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553456306s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706092834s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364743233s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291030884s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553420067s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706092834s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364717484s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291030884s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953078270s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106094360s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523504257s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.449890137s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552910805s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705947876s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953042030s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106094360s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523486137s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.449890137s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552885056s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705947876s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364090919s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290573120s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552580833s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705924988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552537918s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705924988s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364062309s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290573120s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363142014s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516761780s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363111496s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516761780s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523032188s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.449851990s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523010254s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.449851990s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.7( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363712311s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290596008s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.7( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363683701s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290596008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.522802353s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.449867249s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.5( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363575935s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290534973s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.522778511s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.449867249s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.8( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363784790s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290969849s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.8( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363761902s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290969849s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1b( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364582062s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291725159s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1b( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364463806s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291725159s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1c( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359720230s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.287040710s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1c( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359689713s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.287040710s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.14( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.5( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.362443924s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290534973s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.1f( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.18( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.13( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.11( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.e( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552035332s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705978394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.1( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.551997185s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705978394s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.1a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.555334091s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709403992s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.555305481s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709403992s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.12( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.951875687s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106101990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.951852798s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106101990s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361502647s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515792847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361481667s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515792847s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.555057526s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709495544s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954181671s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108642578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.555035591s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709495544s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.951672554s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106094360s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361167908s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515792847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.551523209s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706146240s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.950963974s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106094360s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953539848s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108642578s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.554125786s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709457397s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.554110527s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709457397s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.11( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.550776482s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706192017s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.550748825s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706192017s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.360742569s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516227722s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.360729218s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516227722s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.1b( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949995995s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106147766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949970245s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106147766s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549811363s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706161499s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549795151s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706161499s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359342575s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515792847s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.11( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949330330s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106147766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359076500s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515907288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949302673s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106147766s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549243927s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706199646s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549221992s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706199646s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.552453041s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709503174s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.552430153s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709503174s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549049377s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706245422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359049797s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515907288s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.12( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548954010s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706230164s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548913956s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706146240s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548999786s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706245422s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548927307s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706230164s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948949814s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106262207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.14( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548859596s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706268311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948909760s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106262207s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548843384s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706268311s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548896790s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706405640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548877716s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706405640s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548740387s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706306458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548727989s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706306458s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.18( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.18( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.1b( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.1c( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947653770s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106277466s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.10( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947635651s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106277466s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547548294s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706329346s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547554970s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706336975s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947497368s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106307983s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547513962s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706329346s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547524452s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706336975s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.550722122s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709564209s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947466850s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106307983s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.550680161s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709564209s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357912064s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516860962s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357893944s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516860962s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547277451s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706367493s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.551300049s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.710418701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.551282883s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.710418701s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.7( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547235489s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706367493s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.2( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547184944s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706382751s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949555397s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108810425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949542046s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108810425s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547147751s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706382751s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.1f( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357487679s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516868591s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357476234s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516868591s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547074318s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706504822s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547037125s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706504822s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.358347893s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.517913818s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.358334541s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.517913818s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.550650597s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 42'483 active pruub 94.710289001s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.546828270s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706474304s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949105263s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108840942s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.550603867s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 42'483 unknown NOTIFY pruub 94.710289001s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949076653s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108840942s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553970337s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.713775635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.546689987s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706474304s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553945541s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.713775635s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948929787s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108871460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554060936s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.714012146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554046631s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.714012146s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948907852s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108871460s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357830048s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.517883301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553935051s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.714035034s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.549504280s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709617615s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357792854s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.517883301s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.549485207s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709617615s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356709480s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516860962s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356665611s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516860962s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553778648s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.714035034s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357501030s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.517868042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357481003s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.517868042s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948439598s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108833313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948400497s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108833313s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948363304s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108863831s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948298454s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108863831s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948136330s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108894348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552777290s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.714118958s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552760124s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.714118958s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947521210s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108894348s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552537918s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.714134216s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552523613s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.714134216s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548697472s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.710365295s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553496361s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.715225220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548639297s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.710365295s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553483963s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.715225220s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548583031s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.710357666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356245995s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.518089294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548546791s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.710357666s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356234550s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.518089294s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.946858406s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108917236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356202126s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.518264771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.946837425s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108917236s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553113937s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.715209961s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553086281s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.715209961s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356178284s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.518264771s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.946687698s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108917236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.946675301s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108917236s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548074722s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.710395813s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548045158s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.710395813s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.355474472s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.518211365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.355379105s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.518211365s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552412033s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.715270996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552382469s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.715270996s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.551445961s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.715240479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.551402092s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.715240479s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.12( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.6( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.10( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.1( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.3( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.14( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.5( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.9( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.c( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.8( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.3( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.2( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.e( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.3( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.9( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.5( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.8( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.2( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.1( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.8( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.2( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.f( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.1( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.e( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.a( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.1( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.18( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.1b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.4( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.a( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.15( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.11( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.4( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.4( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.11( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.6( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.9( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.16( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.1( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.1c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.8( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.3( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.c( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.9( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.6( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.6( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.4( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.f( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.1b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.1c( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.5( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.9( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.1a( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.12( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.18( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.15( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.1d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.17( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.13( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 21 08:46:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:55 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.1a( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.1b( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.18( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.1a( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.12( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.1d( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.15( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.c( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.d( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.b( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.8( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=56/57 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.e( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.2( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.5( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.1( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.1( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.9( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.2( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.e( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.8( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.a( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.a( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=56/57 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.11( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.15( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.18( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1b( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.e( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.11( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1a( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1f( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1c( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.13( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1e( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.11( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.11( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.1c( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.1c( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.11( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.14( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.13( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.8( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.3( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.b( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.e( v 53'19 lc 39'4 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.16( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.1f( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.2( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.d( v 53'19 lc 39'5 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.5( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.1c( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.4( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.f( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.1d( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.1e( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.18( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.15( v 53'19 lc 39'3 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.9( v 53'19 lc 39'8 (0'0,53'19] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.19( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.1b( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.f( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.c( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.3( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.1( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.17( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.6( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.a( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.9( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.13( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.15( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.15( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.17( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.12( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.16( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.12( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.9( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.1f( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.8( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.f( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.d( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.5( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.a( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.3( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.c( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.9( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.4( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.7( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.1( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.6( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.11( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.1b( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.1d( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.1a( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.12( v 53'19 lc 41'17 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.18( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.14( v 53'19 lc 39'7 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.19( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.6( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=56/57 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.2( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.4( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.d( v 38'39 lc 36'13 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.f( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.f( v 38'39 lc 36'1 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.d( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.7( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.5( v 38'39 lc 36'11 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.5( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.7( v 38'39 lc 36'21 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.9( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.14( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.12( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.10( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:46:56 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event 9a8c3382-30e7-4c11-b909-be7da62b8856 (Global Recovery Event) in 10 seconds
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 21 08:46:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=53'484 lcod 42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 21 08:46:57 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 21 08:46:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 21 08:46:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 08:46:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 08:46:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 21 08:46:58 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998245239s) [0] async=[0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 active pruub 101.008323669s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998225212s) [0] async=[0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 active pruub 101.008255005s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998094559s) [0] async=[0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 active pruub 101.008239746s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998041153s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008239746s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998172760s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008323669s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.997398376s) [0] async=[0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 active pruub 101.008316040s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.997216225s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008316040s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.997075081s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008255005s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.660623550s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 active pruub 104.448310852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.660583496s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 104.448310852s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.662180901s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 active pruub 104.450340271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.662142754s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 104.450340271s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.662052155s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 active pruub 104.450317383s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.662023544s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 104.450317383s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.661510468s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 active pruub 104.450065613s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:58 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.661439896s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 104.450065613s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:58 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:58 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 08:46:58 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 08:46:58 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 21 08:46:58 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 21 08:46:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 524 B/s, 6 objects/s recovering
Jan 21 08:46:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 21 08:46:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 21 08:46:59 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.700810432s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.010345459s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.700797081s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.010353088s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698727608s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008323669s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.700737000s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.010345459s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.700737953s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.010353088s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698695183s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008323669s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699062347s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008819580s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699033737s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008819580s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698680878s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008621216s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698640823s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008621216s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698682785s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008827209s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698513031s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008689880s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698637009s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008827209s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698471069s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008689880s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698368073s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=53'484 lcod 58'485 active pruub 101.008743286s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699906349s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.010353088s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698307037s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=53'484 lcod 58'485 unknown NOTIFY pruub 101.008743286s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699880600s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.010353088s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699876785s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.010360718s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699830055s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.010360718s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698328972s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008880615s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698302269s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008880615s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698155403s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008804321s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698122025s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008804321s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=59/60 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=59/60 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[6.e( v 38'39 lc 36'19 (0'0,38'39] local-lis/les=59/60 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:59 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=59/60 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:46:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=59/60 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 21 08:47:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 21 08:47:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 21 08:47:00 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=58'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 12 active+remapped, 8 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s, 2 keys/s, 30 objects/s recovering
Jan 21 08:47:01 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 17 completed events
Jan 21 08:47:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 08:47:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:47:02 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 21 08:47:02 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:47:02 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 21 08:47:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 12 active+remapped, 8 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s, 1 keys/s, 20 objects/s recovering
Jan 21 08:47:04 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 21 08:47:04 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 21 08:47:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 893 B/s, 1 keys/s, 17 objects/s recovering
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 21 08:47:05 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 21 08:47:05 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 21 08:47:05 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389625549s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 active pruub 108.017753601s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389667511s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 active pruub 108.017982483s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389375687s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 active pruub 108.018028259s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389333725s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 unknown NOTIFY pruub 108.018028259s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389297485s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 unknown NOTIFY pruub 108.017982483s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389019966s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 active pruub 108.018058777s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.388999939s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 unknown NOTIFY pruub 108.018058777s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389514923s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 unknown NOTIFY pruub 108.017753601s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 62 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 62 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 21 08:47:06 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 21 08:47:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 21 08:47:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 08:47:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 08:47:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 21 08:47:06 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 21 08:47:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 63 pg[6.7( v 38'39 lc 36'21 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 63 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 63 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=62/63 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 63 pg[6.f( v 38'39 lc 36'1 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 21 08:47:07 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 64 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64 pruub=12.313432693s) [1] r=-1 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 active pruub 112.450233459s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:07 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 64 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64 pruub=12.313391685s) [1] r=-1 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 112.450233459s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:07 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 64 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64 pruub=12.313249588s) [1] r=-1 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 active pruub 112.450508118s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:07 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 64 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64 pruub=12.313024521s) [1] r=-1 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 112.450508118s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 21 08:47:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 21 08:47:07 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 64 pg[6.c( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64) [1] r=0 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:07 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 64 pg[6.4( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64) [1] r=0 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 21 08:47:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 21 08:47:08 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 21 08:47:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 08:47:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 08:47:08 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 65 pg[6.4( v 38'39 lc 36'15 (0'0,38'39] local-lis/les=64/65 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64) [1] r=0 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:08 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 65 pg[6.c( v 38'39 lc 36'17 (0'0,38'39] local-lis/les=64/65 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64) [1] r=0 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 21 08:47:09 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 21 08:47:09 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 21 08:47:09 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 21 08:47:10 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 21 08:47:10 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 21 08:47:10 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 21 08:47:10 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 21 08:47:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 08:47:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 08:47:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:47:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:47:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:47:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:47:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:47:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:47:11 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 66 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=9.361306190s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=38'39 active pruub 108.017868042s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:11 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 66 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:11 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 66 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=9.361262321s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=38'39 unknown NOTIFY pruub 108.017868042s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:11 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 66 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=9.360560417s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=38'39 active pruub 108.018013000s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:11 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 66 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=9.360506058s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=38'39 unknown NOTIFY pruub 108.018013000s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:11 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 66 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 488 B/s, 2 keys/s, 2 objects/s recovering
Jan 21 08:47:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 21 08:47:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 21 08:47:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 21 08:47:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 21 08:47:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 21 08:47:13 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.078385353s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 active pruub 110.709602356s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:13 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.078169823s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 110.709602356s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:13 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077580452s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 active pruub 110.709732056s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:13 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077262878s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 110.709732056s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:13 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077105522s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 active pruub 110.709732056s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:13 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077071190s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 110.709732056s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:13 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077226639s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 active pruub 110.710494995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:13 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077081680s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 110.710494995s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 21 08:47:13 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [2] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:13 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [2] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:13 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [2] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:13 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [2] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:13 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 67 pg[6.5( v 38'39 lc 36'11 (0'0,38'39] local-lis/les=66/67 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:13 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 67 pg[6.d( v 38'39 lc 36'13 (0'0,38'39] local-lis/les=66/67 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 402 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 21 08:47:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 21 08:47:13 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 21 08:47:13 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 21 08:47:13 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 21 08:47:13 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 21 08:47:14 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:14 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:14 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:14 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:14 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.484229088s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 active pruub 117.064270020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=68 pruub=9.587678909s) [2] r=-1 lpr=68 pi=[59,68)/1 crt=42'483 active pruub 116.167907715s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=68 pruub=9.587616920s) [2] r=-1 lpr=68 pi=[59,68)/1 crt=42'483 unknown NOTIFY pruub 116.167907715s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=68) [2] r=0 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.483852386s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 unknown NOTIFY pruub 117.064270020s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.482997894s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 active pruub 117.064353943s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.482966423s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 unknown NOTIFY pruub 117.064353943s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.479993820s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 active pruub 117.060600281s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.479021072s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 unknown NOTIFY pruub 117.060600281s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68) [2] r=0 lpr=68 pi=[60,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68) [2] r=0 lpr=68 pi=[60,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68) [2] r=0 lpr=68 pi=[60,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 21 08:47:14 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 21 08:47:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 21 08:47:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 21 08:47:15 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 21 08:47:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 08:47:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 08:47:15 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:15 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:15 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:15 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:15 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:15 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:15 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:15 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:15 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 69 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=68/69 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:15 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 69 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=68/69 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:15 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 69 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=68/69 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:15 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 69 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=68/69 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 0 objects/s recovering
Jan 21 08:47:15 np0005590528 python3[97991]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:47:15 np0005590528 podman[97992]: 2026-01-21 13:47:15.744408661 +0000 UTC m=+0.049783056 container create 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:47:15 np0005590528 systemd[1]: Started libpod-conmon-4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b.scope.
Jan 21 08:47:15 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:47:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3878e462b35c765b9bed9e20a447c90005f06317252411a06e625a2c0127f696/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3878e462b35c765b9bed9e20a447c90005f06317252411a06e625a2c0127f696/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:15 np0005590528 podman[97992]: 2026-01-21 13:47:15.72116859 +0000 UTC m=+0.026543085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:47:15 np0005590528 podman[97992]: 2026-01-21 13:47:15.825403616 +0000 UTC m=+0.130778041 container init 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:47:15 np0005590528 podman[97992]: 2026-01-21 13:47:15.836905324 +0000 UTC m=+0.142279729 container start 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:47:15 np0005590528 podman[97992]: 2026-01-21 13:47:15.840928034 +0000 UTC m=+0.146302499 container attach 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 21 08:47:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 21 08:47:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 21 08:47:16 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 21 08:47:16 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.001122475s) [2] async=[2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 0'0 active pruub 118.653938293s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.001036644s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 118.653938293s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:16 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=68/69 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.007452011s) [2] async=[2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 0'0 active pruub 118.661338806s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=68/69 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.007366180s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 118.661338806s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:16 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=68/69 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.007286072s) [2] async=[2] r=-1 lpr=70 pi=[52,70)/1 crt=69'484 lcod 69'484 active pruub 118.661315918s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=68/69 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.007211685s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=69'484 lcod 69'484 unknown NOTIFY pruub 118.661315918s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:16 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.006855011s) [2] async=[2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 42'483 active pruub 118.661369324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.006767273s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 42'483 unknown NOTIFY pruub 118.661369324s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=0/0 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 pct=0'0 crt=69'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=0/0 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=69'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 70 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=69/70 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 70 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=69/70 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 70 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 70 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 21 08:47:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 21 08:47:16 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=0/0 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=0/0 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=0/0 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=0/0 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=15.640702248s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=42'483 active pruub 124.440063477s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=15.640651703s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=42'483 unknown NOTIFY pruub 124.440063477s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=69/70 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.634465218s) [2] async=[2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 lcod 42'483 active pruub 124.433929443s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=69/70 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.634424210s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 lcod 42'483 unknown NOTIFY pruub 124.433929443s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=69/70 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.640273094s) [2] async=[2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 lcod 42'483 active pruub 124.439979553s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=69/70 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.640201569s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 lcod 42'483 unknown NOTIFY pruub 124.439979553s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.640139580s) [2] async=[2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 active pruub 124.440063477s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:16 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.640014648s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 unknown NOTIFY pruub 124.440063477s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=70/71 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=69'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=70/71 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=70/71 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=69'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 21 08:47:16 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 21 08:47:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Jan 21 08:47:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 21 08:47:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 21 08:47:17 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 21 08:47:17 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 72 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:17 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 72 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=71/72 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=70'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:17 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 72 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:17 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 72 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=71/72 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=70'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:17 np0005590528 dazzling_goldwasser[98007]: could not fetch user info: no user info saved
Jan 21 08:47:17 np0005590528 systemd[1]: libpod-4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b.scope: Deactivated successfully.
Jan 21 08:47:17 np0005590528 podman[97992]: 2026-01-21 13:47:17.661493997 +0000 UTC m=+1.966868422 container died 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 08:47:17 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3878e462b35c765b9bed9e20a447c90005f06317252411a06e625a2c0127f696-merged.mount: Deactivated successfully.
Jan 21 08:47:17 np0005590528 podman[97992]: 2026-01-21 13:47:17.704179873 +0000 UTC m=+2.009554298 container remove 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:47:17 np0005590528 systemd[1]: libpod-conmon-4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b.scope: Deactivated successfully.
Jan 21 08:47:17 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 21 08:47:17 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 21 08:47:18 np0005590528 python3[98130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:47:18 np0005590528 podman[98131]: 2026-01-21 13:47:18.11763845 +0000 UTC m=+0.050669947 container create d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 08:47:18 np0005590528 systemd[1]: Started libpod-conmon-d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3.scope.
Jan 21 08:47:18 np0005590528 podman[98131]: 2026-01-21 13:47:18.091327802 +0000 UTC m=+0.024359379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 08:47:18 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:47:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a2b0effbf3107bf23fe92e5ba99312efa93251fcbd051b4686ae470fb91666/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a2b0effbf3107bf23fe92e5ba99312efa93251fcbd051b4686ae470fb91666/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:18 np0005590528 podman[98131]: 2026-01-21 13:47:18.213931558 +0000 UTC m=+0.146963105 container init d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 08:47:18 np0005590528 podman[98131]: 2026-01-21 13:47:18.22402539 +0000 UTC m=+0.157056897 container start d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:47:18 np0005590528 podman[98131]: 2026-01-21 13:47:18.228407059 +0000 UTC m=+0.161438656 container attach d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]: {
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "user_id": "openstack",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "display_name": "openstack",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "email": "",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "suspended": 0,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "max_buckets": 1000,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "subusers": [],
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "keys": [
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        {
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:            "user": "openstack",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:            "access_key": "FZ28JX2UU0J6W10EUYA1",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:            "secret_key": "srDBpy6DXA8pK55MZ5QaJF72pcBfM6bJTQjo7e80",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:            "active": true,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:            "create_date": "2026-01-21T13:47:18.478009Z"
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        }
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    ],
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "swift_keys": [],
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "caps": [],
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "op_mask": "read, write, delete",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "default_placement": "",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "default_storage_class": "",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "placement_tags": [],
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "bucket_quota": {
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "enabled": false,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "check_on_raw": false,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "max_size": -1,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "max_size_kb": 0,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "max_objects": -1
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    },
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "user_quota": {
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "enabled": false,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "check_on_raw": false,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "max_size": -1,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "max_size_kb": 0,
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:        "max_objects": -1
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    },
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "temp_url_keys": [],
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "type": "rgw",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "mfa_ids": [],
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "account_id": "",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "path": "/",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "create_date": "2026-01-21T13:47:18.477459Z",
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "tags": [],
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]:    "group_ids": []
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]: }
Jan 21 08:47:18 np0005590528 compassionate_mcclintock[98146]: 
Jan 21 08:47:18 np0005590528 systemd[1]: libpod-d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3.scope: Deactivated successfully.
Jan 21 08:47:18 np0005590528 podman[98131]: 2026-01-21 13:47:18.509133457 +0000 UTC m=+0.442164964 container died d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 08:47:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c6a2b0effbf3107bf23fe92e5ba99312efa93251fcbd051b4686ae470fb91666-merged.mount: Deactivated successfully.
Jan 21 08:47:18 np0005590528 podman[98131]: 2026-01-21 13:47:18.556291416 +0000 UTC m=+0.489322953 container remove d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:47:18 np0005590528 systemd[1]: libpod-conmon-d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3.scope: Deactivated successfully.
Jan 21 08:47:18 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 21 08:47:18 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 21 08:47:18 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 21 08:47:18 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 21 08:47:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 74 op/s; 457 B/s, 11 objects/s recovering
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 21 08:47:19 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 21 08:47:19 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 21 08:47:19 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 21 08:47:19 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 73 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=73 pruub=8.519322395s) [2] r=-1 lpr=73 pi=[48,73)/1 crt=38'39 lcod 0'0 active pruub 120.453178406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:19 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 73 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=73 pruub=8.519268990s) [2] r=-1 lpr=73 pi=[48,73)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 120.453178406s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:19 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 73 pg[6.8( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 73 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=11.078841209s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=42'483 lcod 0'0 active pruub 118.709938049s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 73 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=11.078646660s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 118.709938049s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 73 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=11.078823090s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=72'486 lcod 72'486 active pruub 118.710662842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 73 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=11.078744888s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=72'486 lcod 72'486 unknown NOTIFY pruub 118.710662842s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 21 08:47:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 08:47:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 08:47:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 21 08:47:20 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 21 08:47:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[52,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[52,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[52,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[52,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=73/74 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 74 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=0 lpr=74 pi=[52,74)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 74 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=0 lpr=74 pi=[52,74)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 74 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=0 lpr=74 pi=[52,74)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 74 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=0 lpr=74 pi=[52,74)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 21 08:47:20 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.0 KiB/s wr, 92 op/s; 400 B/s, 9 objects/s recovering
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 21 08:47:21 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 75 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=14.949493408s) [0] r=-1 lpr=75 pi=[56,75)/1 crt=38'39 lcod 0'0 active pruub 124.018341064s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:21 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 75 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=14.949440956s) [0] r=-1 lpr=75 pi=[56,75)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 124.018341064s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:21 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 21 08:47:21 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 75 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=75) [0] r=0 lpr=75 pi=[56,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 21 08:47:21 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 21 08:47:21 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 21 08:47:21 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 75 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=74/75 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[52,74)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:21 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 75 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=74/75 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[52,74)/1 crt=72'487 lcod 72'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 21 08:47:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 21 08:47:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 21 08:47:22 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 76 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=74/75 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76 pruub=14.991138458s) [2] async=[2] r=-1 lpr=76 pi=[52,76)/1 crt=42'483 lcod 0'0 active pruub 125.083801270s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:22 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 76 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=74/75 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76 pruub=14.991033554s) [2] r=-1 lpr=76 pi=[52,76)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 125.083801270s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:22 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 76 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=74/75 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76 pruub=14.992052078s) [2] async=[2] r=-1 lpr=76 pi=[52,76)/1 crt=72'487 lcod 72'486 active pruub 125.085105896s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:22 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 76 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=74/75 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76 pruub=14.992010117s) [2] r=-1 lpr=76 pi=[52,76)/1 crt=72'487 lcod 72'486 unknown NOTIFY pruub 125.085105896s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 08:47:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 08:47:22 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 76 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=75/76 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=75) [0] r=0 lpr=75 pi=[56,75)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:22 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 76 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:22 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 76 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:22 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 76 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 pct=0'0 crt=72'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:22 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 76 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 crt=72'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 32 op/s
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 21 08:47:23 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 77 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=48/25 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=8.283657074s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=38'39 lcod 0'0 active pruub 119.388328552s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:23 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 77 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=48/25 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=8.283535957s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 119.388328552s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 21 08:47:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 21 08:47:23 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 77 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:23 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 77 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=76/77 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 crt=72'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:23 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 77 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=76/77 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 21 08:47:24 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 08:47:24 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 08:47:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 21 08:47:24 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 21 08:47:24 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 78 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=77/78 n=1 ec=48/25 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Jan 21 08:47:25 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 21 08:47:25 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 21 08:47:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:26 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 21 08:47:26 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 21 08:47:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 88 B/s, 2 objects/s recovering
Jan 21 08:47:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 1 objects/s recovering
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 21 08:47:29 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 21 08:47:29 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 79 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.063592911s) [1] r=-1 lpr=79 pi=[62,79)/1 crt=38'39 active pruub 131.136032104s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:29 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 79 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.063537598s) [1] r=-1 lpr=79 pi=[62,79)/1 crt=38'39 unknown NOTIFY pruub 131.136032104s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:29 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 79 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=79) [1] r=0 lpr=79 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:30 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 21 08:47:30 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 21 08:47:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 21 08:47:30 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 08:47:30 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 08:47:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 21 08:47:30 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 21 08:47:30 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 80 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=79/80 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=79) [1] r=0 lpr=79 pi=[62,79)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 21 08:47:31 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 21 08:47:31 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 21 08:47:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 21 08:47:32 np0005590528 systemd-logind[780]: New session 34 of user zuul.
Jan 21 08:47:32 np0005590528 systemd[1]: Started Session 34 of User zuul.
Jan 21 08:47:32 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 08:47:32 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 81 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81 pruub=14.136120796s) [2] r=-1 lpr=81 pi=[52,81)/1 crt=42'483 lcod 0'0 active pruub 134.710083008s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 81 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81 pruub=14.136970520s) [2] r=-1 lpr=81 pi=[52,81)/1 crt=72'486 lcod 72'486 active pruub 134.710968018s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 81 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81 pruub=14.136906624s) [2] r=-1 lpr=81 pi=[52,81)/1 crt=72'486 lcod 72'486 unknown NOTIFY pruub 134.710968018s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 81 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81 pruub=14.136027336s) [2] r=-1 lpr=81 pi=[52,81)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 134.710083008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:33 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81) [2] r=0 lpr=81 pi=[52,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:33 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81) [2] r=0 lpr=81 pi=[52,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:33 np0005590528 python3.9[98396]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:47:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 21 08:47:33 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 21 08:47:33 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 82 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=48/25 lis/c=66/66 les/c/f=67/67/0 sis=82 pruub=11.302111626s) [1] r=-1 lpr=82 pi=[66,82)/1 crt=38'39 active pruub 137.414535522s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:33 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 82 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=48/25 lis/c=66/66 les/c/f=67/67/0 sis=82 pruub=11.302055359s) [1] r=-1 lpr=82 pi=[66,82)/1 crt=38'39 unknown NOTIFY pruub 137.414535522s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:33 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 82 pg[9.c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[52,82)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:33 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 82 pg[9.c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[52,82)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:33 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 82 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[52,82)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:33 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 82 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[52,82)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=66/66 les/c/f=67/67/0 sis=82) [1] r=0 lpr=82 pi=[66,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=0 lpr=82 pi=[52,82)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=0 lpr=82 pi=[52,82)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=0 lpr=82 pi=[52,82)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:33 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=0 lpr=82 pi=[52,82)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:34 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 21 08:47:34 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 21 08:47:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 21 08:47:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 21 08:47:34 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 21 08:47:34 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 08:47:34 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 08:47:34 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 83 pg[6.d( v 38'39 lc 36'13 (0'0,38'39] local-lis/les=82/83 n=1 ec=48/25 lis/c=66/66 les/c/f=67/67/0 sis=82) [1] r=0 lpr=82 pi=[66,82)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:34 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 83 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=82/83 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[52,82)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:34 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 83 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=82/83 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[52,82)/1 crt=72'487 lcod 72'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:35 np0005590528 python3.9[98664]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:47:35 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 21 08:47:35 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 21 08:47:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 1 peering, 2 unknown, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 21 08:47:35 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 84 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=82/83 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84 pruub=14.987791061s) [2] async=[2] r=-1 lpr=84 pi=[52,84)/1 crt=42'483 lcod 0'0 active pruub 138.355255127s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:35 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 84 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=82/83 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84 pruub=14.987714767s) [2] r=-1 lpr=84 pi=[52,84)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 138.355255127s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:35 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 84 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=82/83 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84 pruub=14.985897064s) [2] async=[2] r=-1 lpr=84 pi=[52,84)/1 crt=72'487 lcod 72'486 active pruub 138.355270386s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:47:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:47:35 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 84 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=82/83 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84 pruub=14.985070229s) [2] r=-1 lpr=84 pi=[52,84)/1 crt=72'487 lcod 72'486 unknown NOTIFY pruub 138.355270386s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:35 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 84 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:35 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 84 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 pct=0'0 crt=72'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:35 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 84 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:35 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 84 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 crt=72'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:35 np0005590528 podman[98767]: 2026-01-21 13:47:35.952541966 +0000 UTC m=+0.071523949 container create 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:47:35 np0005590528 systemd[1]: Started libpod-conmon-59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295.scope.
Jan 21 08:47:36 np0005590528 podman[98767]: 2026-01-21 13:47:35.921783687 +0000 UTC m=+0.040765720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:47:36 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:47:36 np0005590528 podman[98767]: 2026-01-21 13:47:36.052152596 +0000 UTC m=+0.171134629 container init 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 21 08:47:36 np0005590528 podman[98767]: 2026-01-21 13:47:36.063316435 +0000 UTC m=+0.182298428 container start 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 21 08:47:36 np0005590528 podman[98767]: 2026-01-21 13:47:36.067813417 +0000 UTC m=+0.186795460 container attach 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:47:36 np0005590528 cranky_proskuriakova[98783]: 167 167
Jan 21 08:47:36 np0005590528 systemd[1]: libpod-59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295.scope: Deactivated successfully.
Jan 21 08:47:36 np0005590528 conmon[98783]: conmon 59dd83f5b3322c59d3ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295.scope/container/memory.events
Jan 21 08:47:36 np0005590528 podman[98767]: 2026-01-21 13:47:36.072541135 +0000 UTC m=+0.191523128 container died 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 21 08:47:36 np0005590528 systemd[1]: var-lib-containers-storage-overlay-cd8a1a6118fb72bc31abeef056ec272ff6434d160161eaee5ce89212fd135eb7-merged.mount: Deactivated successfully.
Jan 21 08:47:36 np0005590528 podman[98767]: 2026-01-21 13:47:36.124910285 +0000 UTC m=+0.243892278 container remove 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:47:36 np0005590528 systemd[1]: libpod-conmon-59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295.scope: Deactivated successfully.
Jan 21 08:47:36 np0005590528 podman[98806]: 2026-01-21 13:47:36.336183557 +0000 UTC m=+0.052938655 container create 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:47:36 np0005590528 systemd[1]: Started libpod-conmon-806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002.scope.
Jan 21 08:47:36 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:47:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:36 np0005590528 podman[98806]: 2026-01-21 13:47:36.315367626 +0000 UTC m=+0.032122744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:47:36 np0005590528 podman[98806]: 2026-01-21 13:47:36.417110229 +0000 UTC m=+0.133865367 container init 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 08:47:36 np0005590528 podman[98806]: 2026-01-21 13:47:36.423775266 +0000 UTC m=+0.140530374 container start 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:47:36 np0005590528 podman[98806]: 2026-01-21 13:47:36.426597506 +0000 UTC m=+0.143352664 container attach 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 08:47:36 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 21 08:47:36 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 21 08:47:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:36 np0005590528 festive_hermann[98822]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:47:36 np0005590528 festive_hermann[98822]: --> All data devices are unavailable
Jan 21 08:47:36 np0005590528 systemd[1]: libpod-806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002.scope: Deactivated successfully.
Jan 21 08:47:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 21 08:47:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 21 08:47:36 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 21 08:47:36 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 85 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=84/85 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:36 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 85 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 crt=72'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:36 np0005590528 podman[98845]: 2026-01-21 13:47:36.957094979 +0000 UTC m=+0.026057362 container died 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:47:36 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e-merged.mount: Deactivated successfully.
Jan 21 08:47:36 np0005590528 podman[98845]: 2026-01-21 13:47:36.994651668 +0000 UTC m=+0.063614041 container remove 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 08:47:37 np0005590528 systemd[1]: libpod-conmon-806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002.scope: Deactivated successfully.
Jan 21 08:47:37 np0005590528 podman[98921]: 2026-01-21 13:47:37.485660913 +0000 UTC m=+0.063179050 container create 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 08:47:37 np0005590528 systemd[1]: Started libpod-conmon-1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b.scope.
Jan 21 08:47:37 np0005590528 podman[98921]: 2026-01-21 13:47:37.464800631 +0000 UTC m=+0.042318768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:47:37 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:47:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 1 peering, 2 unknown, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 08:47:37 np0005590528 podman[98921]: 2026-01-21 13:47:37.582023891 +0000 UTC m=+0.159542028 container init 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:47:37 np0005590528 podman[98921]: 2026-01-21 13:47:37.593403727 +0000 UTC m=+0.170921854 container start 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 08:47:37 np0005590528 wonderful_mccarthy[98937]: 167 167
Jan 21 08:47:37 np0005590528 podman[98921]: 2026-01-21 13:47:37.599176501 +0000 UTC m=+0.176694628 container attach 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 08:47:37 np0005590528 systemd[1]: libpod-1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b.scope: Deactivated successfully.
Jan 21 08:47:37 np0005590528 podman[98921]: 2026-01-21 13:47:37.600688869 +0000 UTC m=+0.178206996 container died 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:47:37 np0005590528 systemd[1]: var-lib-containers-storage-overlay-426e042bc2b257357f28aa74ea654af8d69fb37396433939175c13c1bc9c2467-merged.mount: Deactivated successfully.
Jan 21 08:47:37 np0005590528 podman[98921]: 2026-01-21 13:47:37.649198141 +0000 UTC m=+0.226716338 container remove 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:47:37 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 21 08:47:37 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 21 08:47:37 np0005590528 systemd[1]: libpod-conmon-1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b.scope: Deactivated successfully.
Jan 21 08:47:37 np0005590528 podman[98962]: 2026-01-21 13:47:37.893805496 +0000 UTC m=+0.060818831 container create 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:47:37 np0005590528 systemd[1]: Started libpod-conmon-63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64.scope.
Jan 21 08:47:37 np0005590528 podman[98962]: 2026-01-21 13:47:37.867608521 +0000 UTC m=+0.034621906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:47:37 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:47:37 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:37 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:37 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:37 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:38 np0005590528 podman[98962]: 2026-01-21 13:47:38.014592815 +0000 UTC m=+0.181606170 container init 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:47:38 np0005590528 podman[98962]: 2026-01-21 13:47:38.020613616 +0000 UTC m=+0.187626951 container start 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 08:47:38 np0005590528 podman[98962]: 2026-01-21 13:47:38.024722409 +0000 UTC m=+0.191735794 container attach 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]: {
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:    "0": [
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:        {
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "devices": [
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "/dev/loop3"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            ],
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_name": "ceph_lv0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_size": "21470642176",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "name": "ceph_lv0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "tags": {
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cluster_name": "ceph",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.crush_device_class": "",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.encrypted": "0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.objectstore": "bluestore",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osd_id": "0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.type": "block",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.vdo": "0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.with_tpm": "0"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            },
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "type": "block",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "vg_name": "ceph_vg0"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:        }
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:    ],
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:    "1": [
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:        {
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "devices": [
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "/dev/loop4"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            ],
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_name": "ceph_lv1",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_size": "21470642176",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "name": "ceph_lv1",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "tags": {
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cluster_name": "ceph",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.crush_device_class": "",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.encrypted": "0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.objectstore": "bluestore",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osd_id": "1",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.type": "block",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.vdo": "0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.with_tpm": "0"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            },
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "type": "block",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "vg_name": "ceph_vg1"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:        }
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:    ],
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:    "2": [
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:        {
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "devices": [
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "/dev/loop5"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            ],
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_name": "ceph_lv2",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_size": "21470642176",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "name": "ceph_lv2",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "tags": {
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.cluster_name": "ceph",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.crush_device_class": "",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.encrypted": "0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.objectstore": "bluestore",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osd_id": "2",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.type": "block",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.vdo": "0",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:                "ceph.with_tpm": "0"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            },
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "type": "block",
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:            "vg_name": "ceph_vg2"
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:        }
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]:    ]
Jan 21 08:47:38 np0005590528 inspiring_ritchie[98978]: }
Jan 21 08:47:38 np0005590528 systemd[1]: libpod-63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64.scope: Deactivated successfully.
Jan 21 08:47:38 np0005590528 podman[98962]: 2026-01-21 13:47:38.321799006 +0000 UTC m=+0.488812361 container died 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 08:47:38 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4-merged.mount: Deactivated successfully.
Jan 21 08:47:38 np0005590528 podman[98962]: 2026-01-21 13:47:38.377440347 +0000 UTC m=+0.544453702 container remove 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:47:38 np0005590528 systemd[1]: libpod-conmon-63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64.scope: Deactivated successfully.
Jan 21 08:47:38 np0005590528 podman[99061]: 2026-01-21 13:47:38.900334619 +0000 UTC m=+0.050015872 container create 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:47:38 np0005590528 systemd[1]: Started libpod-conmon-5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653.scope.
Jan 21 08:47:38 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:47:38 np0005590528 podman[99061]: 2026-01-21 13:47:38.966249417 +0000 UTC m=+0.115930690 container init 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 08:47:38 np0005590528 podman[99061]: 2026-01-21 13:47:38.87759178 +0000 UTC m=+0.027273013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:47:38 np0005590528 podman[99061]: 2026-01-21 13:47:38.977362324 +0000 UTC m=+0.127043557 container start 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:47:38 np0005590528 happy_shamir[99075]: 167 167
Jan 21 08:47:38 np0005590528 podman[99061]: 2026-01-21 13:47:38.983018996 +0000 UTC m=+0.132700259 container attach 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:47:38 np0005590528 systemd[1]: libpod-5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653.scope: Deactivated successfully.
Jan 21 08:47:38 np0005590528 podman[99061]: 2026-01-21 13:47:38.986089512 +0000 UTC m=+0.135770755 container died 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:47:39 np0005590528 systemd[1]: var-lib-containers-storage-overlay-41f9cbb39e4d34f80418c1a61515d93f2af4ebe74ebfe97818e330d54485bc31-merged.mount: Deactivated successfully.
Jan 21 08:47:39 np0005590528 podman[99061]: 2026-01-21 13:47:39.026602505 +0000 UTC m=+0.176283718 container remove 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 08:47:39 np0005590528 systemd[1]: libpod-conmon-5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653.scope: Deactivated successfully.
Jan 21 08:47:39 np0005590528 podman[99100]: 2026-01-21 13:47:39.178097342 +0000 UTC m=+0.043341704 container create e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:47:39 np0005590528 systemd[1]: Started libpod-conmon-e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d.scope.
Jan 21 08:47:39 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:47:39 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:39 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:39 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:39 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:47:39 np0005590528 podman[99100]: 2026-01-21 13:47:39.162714398 +0000 UTC m=+0.027958780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:47:39 np0005590528 podman[99100]: 2026-01-21 13:47:39.268883482 +0000 UTC m=+0.134127864 container init e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:47:39 np0005590528 podman[99100]: 2026-01-21 13:47:39.278713778 +0000 UTC m=+0.143958140 container start e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:47:39 np0005590528 podman[99100]: 2026-01-21 13:47:39.290583675 +0000 UTC m=+0.155828067 container attach e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 08:47:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:47:39
Jan 21 08:47:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:47:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Some PGs (0.006557) are unknown; try again later
Jan 21 08:47:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 721 B/s wr, 18 op/s; 86 B/s, 2 objects/s recovering
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 21 08:47:39 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 21 08:47:39 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 21 08:47:39 np0005590528 lvm[99200]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:47:39 np0005590528 lvm[99200]: VG ceph_vg0 finished
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 21 08:47:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 21 08:47:39 np0005590528 lvm[99202]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:47:39 np0005590528 lvm[99202]: VG ceph_vg1 finished
Jan 21 08:47:39 np0005590528 lvm[99204]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:47:39 np0005590528 lvm[99204]: VG ceph_vg2 finished
Jan 21 08:47:40 np0005590528 jovial_rhodes[99117]: {}
Jan 21 08:47:40 np0005590528 podman[99100]: 2026-01-21 13:47:40.099642571 +0000 UTC m=+0.964886923 container died e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:47:40 np0005590528 systemd[1]: libpod-e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d.scope: Deactivated successfully.
Jan 21 08:47:40 np0005590528 systemd[1]: libpod-e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d.scope: Consumed 1.254s CPU time.
Jan 21 08:47:40 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f-merged.mount: Deactivated successfully.
Jan 21 08:47:40 np0005590528 podman[99100]: 2026-01-21 13:47:40.154214586 +0000 UTC m=+1.019458948 container remove e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 08:47:40 np0005590528 systemd[1]: libpod-conmon-e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d.scope: Deactivated successfully.
Jan 21 08:47:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:47:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:47:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:47:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:47:40 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 21 08:47:40 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:47:40 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 08:47:40 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 08:47:40 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:47:40 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:47:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 21 08:47:41 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 21 08:47:42 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 87 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=87 pruub=12.918298721s) [2] r=-1 lpr=87 pi=[62,87)/1 crt=38'39 active pruub 147.136352539s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:42 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 87 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=87 pruub=12.918251991s) [2] r=-1 lpr=87 pi=[62,87)/1 crt=38'39 unknown NOTIFY pruub 147.136352539s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 21 08:47:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 21 08:47:42 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 87 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=87) [2] r=0 lpr=87 pi=[62,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:42 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 21 08:47:42 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 21 08:47:42 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 21 08:47:42 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 21 08:47:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 21 08:47:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 21 08:47:43 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 21 08:47:43 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 88 pg[6.f( v 38'39 lc 36'1 (0'0,38'39] local-lis/les=87/88 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=87) [2] r=0 lpr=87 pi=[62,87)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 08:47:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 08:47:43 np0005590528 systemd[1]: session-34.scope: Deactivated successfully.
Jan 21 08:47:43 np0005590528 systemd[1]: session-34.scope: Consumed 8.636s CPU time.
Jan 21 08:47:43 np0005590528 systemd-logind[780]: Session 34 logged out. Waiting for processes to exit.
Jan 21 08:47:43 np0005590528 systemd-logind[780]: Removed session 34.
Jan 21 08:47:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 21 08:47:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 21 08:47:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 21 08:47:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 21 08:47:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 21 08:47:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 21 08:47:44 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 21 08:47:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 21 08:47:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 21 08:47:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 0 objects/s recovering
Jan 21 08:47:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 21 08:47:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 21 08:47:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 21 08:47:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 21 08:47:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 08:47:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 21 08:47:46 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 21 08:47:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 08:47:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 0 objects/s recovering
Jan 21 08:47:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 21 08:47:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 21 08:47:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 21 08:47:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 21 08:47:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 21 08:47:48 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 21 08:47:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 21 08:47:48 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 21 08:47:48 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 21 08:47:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 21 08:47:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 21 08:47:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 21 08:47:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 21 08:47:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 21 08:47:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 21 08:47:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 21 08:47:50 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 21 08:47:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2408416314835177e-06 of space, bias 4.0, pg target 0.0014890099577802214 quantized to 16 (current 16)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:47:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:47:50 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 92 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=92 pruub=14.150473595s) [2] r=-1 lpr=92 pi=[60,92)/1 crt=69'484 lcod 69'484 active pruub 157.065185547s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:50 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 92 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=92 pruub=14.150398254s) [2] r=-1 lpr=92 pi=[60,92)/1 crt=69'484 lcod 69'484 unknown NOTIFY pruub 157.065185547s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:50 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=92) [2] r=0 lpr=92 pi=[60,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 21 08:47:51 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 21 08:47:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 21 08:47:51 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 21 08:47:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[60,93)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:51 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[60,93)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:51 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 93 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] r=0 lpr=93 pi=[60,93)/1 crt=69'484 lcod 69'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:51 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 93 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] r=0 lpr=93 pi=[60,93)/1 crt=69'484 lcod 69'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 21 08:47:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 21 08:47:51 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 21 08:47:51 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 21 08:47:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:47:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 21 08:47:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 21 08:47:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 21 08:47:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 21 08:47:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 21 08:47:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 21 08:47:52 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 21 08:47:52 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 94 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=93/94 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[60,93)/1 crt=72'485 lcod 69'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:52 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 21 08:47:52 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 21 08:47:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 21 08:47:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 21 08:47:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 21 08:47:53 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 21 08:47:53 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 95 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=93/94 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95 pruub=15.251998901s) [2] async=[2] r=-1 lpr=95 pi=[60,95)/1 crt=72'485 lcod 69'484 active pruub 160.677017212s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:53 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 95 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=93/94 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95 pruub=15.251936913s) [2] r=-1 lpr=95 pi=[60,95)/1 crt=72'485 lcod 69'484 unknown NOTIFY pruub 160.677017212s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:53 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 95 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=0/0 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95) [2] r=0 lpr=95 pi=[60,95)/1 pct=0'0 crt=72'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:53 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 95 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=0/0 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95) [2] r=0 lpr=95 pi=[60,95)/1 crt=72'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:53 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 21 08:47:53 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 21 08:47:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:47:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 21 08:47:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 21 08:47:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 21 08:47:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 21 08:47:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 21 08:47:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 21 08:47:54 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 21 08:47:54 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 96 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=95/96 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95) [2] r=0 lpr=95 pi=[60,95)/1 crt=72'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:54 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 96 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=96 pruub=9.587777138s) [1] r=-1 lpr=96 pi=[59,96)/1 crt=42'483 active pruub 156.169006348s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:54 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 96 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=96 pruub=9.587731361s) [1] r=-1 lpr=96 pi=[59,96)/1 crt=42'483 unknown NOTIFY pruub 156.169006348s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:54 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=96) [1] r=0 lpr=96 pi=[59,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:54 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 21 08:47:54 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 21 08:47:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 21 08:47:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 21 08:47:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 21 08:47:55 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 21 08:47:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] r=-1 lpr=97 pi=[59,97)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:55 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] r=-1 lpr=97 pi=[59,97)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 97 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] r=0 lpr=97 pi=[59,97)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:55 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 97 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] r=0 lpr=97 pi=[59,97)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 0 objects/s recovering
Jan 21 08:47:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 21 08:47:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 21 08:47:56 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 21 08:47:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 98 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=97/98 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[59,97)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:56 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 21 08:47:56 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 21 08:47:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:47:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 21 08:47:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 21 08:47:56 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 21 08:47:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 99 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=97/98 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99 pruub=15.705821991s) [1] async=[1] r=-1 lpr=99 pi=[59,99)/1 crt=42'483 active pruub 164.488800049s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:56 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 99 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=97/98 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99 pruub=15.705730438s) [1] r=-1 lpr=99 pi=[59,99)/1 crt=42'483 unknown NOTIFY pruub 164.488800049s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 99 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99) [1] r=0 lpr=99 pi=[59,99)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:56 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 99 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99) [1] r=0 lpr=99 pi=[59,99)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 21 08:47:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 21 08:47:57 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 21 08:47:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:47:57 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 100 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=99/100 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99) [1] r=0 lpr=99 pi=[59,99)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:47:58 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 21 08:47:58 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 21 08:47:59 np0005590528 systemd-logind[780]: New session 35 of user zuul.
Jan 21 08:47:59 np0005590528 systemd[1]: Started Session 35 of User zuul.
Jan 21 08:47:59 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 21 08:47:59 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 21 08:47:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 473 B/s wr, 10 op/s; 50 B/s, 1 objects/s recovering
Jan 21 08:47:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 21 08:47:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 21 08:47:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 21 08:47:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 21 08:47:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 21 08:47:59 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 101 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=12.996678352s) [0] r=-1 lpr=101 pi=[70,101)/1 crt=42'483 active pruub 155.522689819s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:47:59 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 101 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=12.996642113s) [0] r=-1 lpr=101 pi=[70,101)/1 crt=42'483 unknown NOTIFY pruub 155.522689819s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:47:59 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=101) [0] r=0 lpr=101 pi=[70,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:47:59 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 21 08:47:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 21 08:47:59 np0005590528 python3.9[99434]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 21 08:48:00 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 21 08:48:00 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 21 08:48:00 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 21 08:48:00 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 21 08:48:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 21 08:48:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 21 08:48:00 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 21 08:48:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[70,102)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:00 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[70,102)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:00 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 102 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=0 lpr=102 pi=[70,102)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:00 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 102 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=0 lpr=102 pi=[70,102)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 21 08:48:01 np0005590528 python3.9[99608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:48:01 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 21 08:48:01 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 408 B/s wr, 8 op/s; 43 B/s, 1 objects/s recovering
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 21 08:48:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 21 08:48:01 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 103 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=102/103 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] async=[0] r=0 lpr=102 pi=[70,102)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:02 np0005590528 python3.9[99764]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:48:02 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 21 08:48:02 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 21 08:48:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 21 08:48:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 21 08:48:02 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 21 08:48:02 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 104 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=102/103 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.991653442s) [0] async=[0] r=-1 lpr=104 pi=[70,104)/1 crt=42'483 active pruub 160.563919067s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:02 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 104 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=102/103 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.991371155s) [0] r=-1 lpr=104 pi=[70,104)/1 crt=42'483 unknown NOTIFY pruub 160.563919067s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:02 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 104 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:02 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 104 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:02 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 21 08:48:02 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 21 08:48:03 np0005590528 python3.9[99917]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:48:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 21 08:48:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 21 08:48:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 21 08:48:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 21 08:48:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 21 08:48:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 21 08:48:03 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 21 08:48:03 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 105 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=104/105 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:04 np0005590528 python3.9[100071]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:48:04 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 21 08:48:04 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 21 08:48:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 21 08:48:05 np0005590528 python3.9[100223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:48:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 21 08:48:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 21 08:48:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 21 08:48:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 21 08:48:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 21 08:48:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 21 08:48:05 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 21 08:48:05 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 106 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=106 pruub=15.186129570s) [2] r=-1 lpr=106 pi=[60,106)/1 crt=72'486 lcod 72'486 active pruub 173.065551758s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:05 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 106 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=106 pruub=15.186038971s) [2] r=-1 lpr=106 pi=[60,106)/1 crt=72'486 lcod 72'486 unknown NOTIFY pruub 173.065551758s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:05 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=106) [2] r=0 lpr=106 pi=[60,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 21 08:48:05 np0005590528 python3.9[100373]: ansible-ansible.builtin.service_facts Invoked
Jan 21 08:48:05 np0005590528 network[100390]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 08:48:05 np0005590528 network[100391]: 'network-scripts' will be removed from distribution in near future.
Jan 21 08:48:05 np0005590528 network[100392]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 08:48:06 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 21 08:48:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 21 08:48:06 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 21 08:48:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 21 08:48:06 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 21 08:48:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 107 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] r=0 lpr=107 pi=[60,107)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:06 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 107 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] r=0 lpr=107 pi=[60,107)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:06 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[60,107)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:06 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[60,107)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:06 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 21 08:48:07 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 21 08:48:07 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 21 08:48:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 21 08:48:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 21 08:48:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 21 08:48:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 21 08:48:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 21 08:48:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 21 08:48:07 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 21 08:48:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 21 08:48:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 21 08:48:07 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 108 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=107/108 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[60,107)/1 crt=72'487 lcod 72'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:08 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 21 08:48:08 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 21 08:48:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 21 08:48:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 21 08:48:08 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 21 08:48:08 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 109 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=107/108 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109 pruub=14.968790054s) [2] async=[2] r=-1 lpr=109 pi=[60,109)/1 crt=72'487 lcod 72'486 active pruub 175.924285889s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:08 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 109 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=107/108 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109 pruub=14.968612671s) [2] r=-1 lpr=109 pi=[60,109)/1 crt=72'487 lcod 72'486 unknown NOTIFY pruub 175.924285889s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:08 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 109 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109) [2] r=0 lpr=109 pi=[60,109)/1 pct=0'0 crt=72'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:08 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 109 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109) [2] r=0 lpr=109 pi=[60,109)/1 crt=72'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 21 08:48:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 21 08:48:09 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 21 08:48:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 21 08:48:09 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 21 08:48:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 21 08:48:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 21 08:48:09 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 21 08:48:09 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 21 08:48:09 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 110 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=109/110 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109) [2] r=0 lpr=109 pi=[60,109)/1 crt=72'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:09 np0005590528 python3.9[100652]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:48:10 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 21 08:48:10 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 21 08:48:10 np0005590528 python3.9[100802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:48:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 21 08:48:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:48:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:48:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:48:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:48:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:48:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:48:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 21 08:48:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 21 08:48:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 21 08:48:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 21 08:48:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 21 08:48:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 21 08:48:11 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 21 08:48:11 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 111 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=111 pruub=13.138989449s) [0] r=-1 lpr=111 pi=[84,111)/1 crt=72'487 active pruub 167.878814697s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:11 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 111 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=111 pruub=13.138909340s) [0] r=-1 lpr=111 pi=[84,111)/1 crt=72'487 unknown NOTIFY pruub 167.878814697s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:11 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=111) [0] r=0 lpr=111 pi=[84,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:11 np0005590528 python3.9[100956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:48:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 21 08:48:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 21 08:48:12 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 21 08:48:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 21 08:48:12 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[84,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:12 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[84,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:12 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 112 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] r=0 lpr=112 pi=[84,112)/1 crt=72'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:12 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 112 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] r=0 lpr=112 pi=[84,112)/1 crt=72'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:12 np0005590528 python3.9[101114]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:48:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 21 08:48:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 21 08:48:13 np0005590528 python3.9[101198]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:48:13 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 21 08:48:13 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 21 08:48:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 21 08:48:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 21 08:48:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 21 08:48:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 21 08:48:13 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 21 08:48:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 113 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=112/113 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[84,112)/1 crt=72'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:14 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 21 08:48:14 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 21 08:48:14 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 21 08:48:14 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 21 08:48:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 21 08:48:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 21 08:48:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 21 08:48:14 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 21 08:48:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 114 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114) [0] r=0 lpr=114 pi=[84,114)/1 pct=0'0 crt=72'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:14 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 114 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114) [0] r=0 lpr=114 pi=[84,114)/1 crt=72'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 114 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=112/113 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114 pruub=15.188015938s) [0] async=[0] r=-1 lpr=114 pi=[84,114)/1 crt=72'487 active pruub 172.965347290s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:14 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 114 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=112/113 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114 pruub=15.187899590s) [0] r=-1 lpr=114 pi=[84,114)/1 crt=72'487 unknown NOTIFY pruub 172.965347290s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:15 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 21 08:48:15 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 21 08:48:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Jan 21 08:48:15 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 21 08:48:15 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 21 08:48:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 21 08:48:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 21 08:48:15 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 21 08:48:15 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 115 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=114/115 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114) [0] r=0 lpr=114 pi=[84,114)/1 crt=72'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:16 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 21 08:48:16 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 21 08:48:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:17 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 21 08:48:17 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 21 08:48:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 21 08:48:17 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 21 08:48:17 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 21 08:48:18 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 21 08:48:18 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 21 08:48:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 138 B/s, 2 objects/s recovering
Jan 21 08:48:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 21 08:48:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 21 08:48:19 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 21 08:48:19 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 21 08:48:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 21 08:48:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 21 08:48:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 21 08:48:19 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 21 08:48:19 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 116 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=116 pruub=8.702566147s) [0] r=-1 lpr=116 pi=[70,116)/1 crt=69'484 lcod 69'484 active pruub 171.524093628s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:19 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 116 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=116 pruub=8.702425003s) [0] r=-1 lpr=116 pi=[70,116)/1 crt=69'484 lcod 69'484 unknown NOTIFY pruub 171.524093628s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 21 08:48:19 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=116) [0] r=0 lpr=116 pi=[70,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:20 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 21 08:48:20 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 21 08:48:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 21 08:48:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 21 08:48:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 21 08:48:20 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[70,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:20 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[70,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:20 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 21 08:48:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 117 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] r=0 lpr=117 pi=[70,117)/1 crt=69'484 lcod 69'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:20 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 117 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] r=0 lpr=117 pi=[70,117)/1 crt=69'484 lcod 69'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 21 08:48:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 08:48:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:48:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:21 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 21 08:48:21 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 21 08:48:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 21 08:48:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:48:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 21 08:48:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 21 08:48:21 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 08:48:21 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 118 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=118 pruub=15.664496422s) [1] r=-1 lpr=118 pi=[71,118)/1 crt=42'483 active pruub 180.526473999s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:21 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 118 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=118 pruub=15.664244652s) [1] r=-1 lpr=118 pi=[71,118)/1 crt=42'483 unknown NOTIFY pruub 180.526473999s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:21 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 118 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=117/118 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[70,117)/1 crt=72'485 lcod 69'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:21 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=118) [1] r=0 lpr=118 pi=[71,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 21 08:48:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 08:48:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 21 08:48:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 21 08:48:22 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[71,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:22 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[71,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:22 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 119 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] r=0 lpr=119 pi=[71,119)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:22 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 119 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] r=0 lpr=119 pi=[71,119)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:22 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 119 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=117/118 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119 pruub=14.974673271s) [0] async=[0] r=-1 lpr=119 pi=[70,119)/1 crt=72'485 lcod 69'484 active pruub 180.866104126s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:22 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 119 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=117/118 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119 pruub=14.974523544s) [0] r=-1 lpr=119 pi=[70,119)/1 crt=72'485 lcod 69'484 unknown NOTIFY pruub 180.866104126s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:22 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 119 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=0/0 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119) [0] r=0 lpr=119 pi=[70,119)/1 pct=0'0 crt=72'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:22 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 119 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=0/0 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119) [0] r=0 lpr=119 pi=[70,119)/1 crt=72'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:23 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 21 08:48:23 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 21 08:48:23 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 21 08:48:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:23 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 21 08:48:23 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 21 08:48:23 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 21 08:48:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 21 08:48:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 21 08:48:23 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 21 08:48:23 np0005590528 ceph-osd[85740]: osd.0 pg_epoch: 120 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=119/120 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119) [0] r=0 lpr=119 pi=[70,119)/1 crt=72'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:24 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 120 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=119/120 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[71,119)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 21 08:48:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 21 08:48:24 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 21 08:48:24 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 121 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121) [1] r=0 lpr=121 pi=[71,121)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:24 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 121 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121) [1] r=0 lpr=121 pi=[71,121)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 08:48:24 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 121 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=119/120 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121 pruub=15.554893494s) [1] async=[1] r=-1 lpr=121 pi=[71,121)/1 crt=42'483 active pruub 183.464660645s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 08:48:24 np0005590528 ceph-osd[87843]: osd.2 pg_epoch: 121 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=119/120 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121 pruub=15.554841042s) [1] r=-1 lpr=121 pi=[71,121)/1 crt=42'483 unknown NOTIFY pruub 183.464660645s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 08:48:25 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 21 08:48:25 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 21 08:48:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 21 08:48:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 21 08:48:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 21 08:48:25 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 21 08:48:25 np0005590528 ceph-osd[86795]: osd.1 pg_epoch: 122 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=121/122 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121) [1] r=0 lpr=121 pi=[71,121)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 08:48:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:27 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 21 08:48:27 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 21 08:48:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 1 objects/s recovering
Jan 21 08:48:29 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 21 08:48:29 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 21 08:48:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Jan 21 08:48:29 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 21 08:48:29 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 21 08:48:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 1 objects/s recovering
Jan 21 08:48:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:32 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 21 08:48:32 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 21 08:48:32 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 21 08:48:32 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 21 08:48:33 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 21 08:48:33 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 21 08:48:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 21 08:48:33 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 21 08:48:33 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 21 08:48:34 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 21 08:48:34 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 21 08:48:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 21 08:48:36 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 21 08:48:36 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 21 08:48:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:37 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 21 08:48:37 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 21 08:48:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 21 08:48:38 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 21 08:48:38 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 21 08:48:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:48:39
Jan 21 08:48:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:48:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:48:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.mgr', 'volumes', 'vms', '.rgw.root', 'default.rgw.control']
Jan 21 08:48:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:48:39 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 21 08:48:39 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 21 08:48:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 21 08:48:40 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 21 08:48:40 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:48:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:48:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:48:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:48:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:48:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:48:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:48:41 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 21 08:48:41 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 21 08:48:41 np0005590528 podman[101487]: 2026-01-21 13:48:41.569330363 +0000 UTC m=+0.085103540 container create 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:48:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:48:41 np0005590528 podman[101487]: 2026-01-21 13:48:41.521441146 +0000 UTC m=+0.037214363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:48:41 np0005590528 systemd[1]: Started libpod-conmon-923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd.scope.
Jan 21 08:48:41 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:48:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:41 np0005590528 podman[101487]: 2026-01-21 13:48:41.706366794 +0000 UTC m=+0.222140001 container init 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:48:41 np0005590528 podman[101487]: 2026-01-21 13:48:41.719163904 +0000 UTC m=+0.234937061 container start 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:48:41 np0005590528 beautiful_goldwasser[101503]: 167 167
Jan 21 08:48:41 np0005590528 systemd[1]: libpod-923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd.scope: Deactivated successfully.
Jan 21 08:48:41 np0005590528 podman[101487]: 2026-01-21 13:48:41.765304036 +0000 UTC m=+0.281077243 container attach 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 08:48:41 np0005590528 podman[101487]: 2026-01-21 13:48:41.767285907 +0000 UTC m=+0.283059074 container died 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 08:48:41 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8feffee9f4e8a24d93f948d616d6673b2258680de1fccd5ad13438e7a92e3102-merged.mount: Deactivated successfully.
Jan 21 08:48:41 np0005590528 podman[101487]: 2026-01-21 13:48:41.932964508 +0000 UTC m=+0.448737665 container remove 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:48:41 np0005590528 systemd[1]: libpod-conmon-923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd.scope: Deactivated successfully.
Jan 21 08:48:42 np0005590528 podman[101529]: 2026-01-21 13:48:42.197458332 +0000 UTC m=+0.089536505 container create 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:48:42 np0005590528 podman[101529]: 2026-01-21 13:48:42.147294035 +0000 UTC m=+0.039372238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:48:42 np0005590528 systemd[1]: Started libpod-conmon-9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83.scope.
Jan 21 08:48:42 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:48:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:42 np0005590528 podman[101529]: 2026-01-21 13:48:42.356784227 +0000 UTC m=+0.248862480 container init 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:48:42 np0005590528 podman[101529]: 2026-01-21 13:48:42.368806208 +0000 UTC m=+0.260884421 container start 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:48:42 np0005590528 podman[101529]: 2026-01-21 13:48:42.388282031 +0000 UTC m=+0.280360434 container attach 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:48:42 np0005590528 upbeat_northcutt[101547]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:48:42 np0005590528 upbeat_northcutt[101547]: --> All data devices are unavailable
Jan 21 08:48:42 np0005590528 systemd[1]: libpod-9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83.scope: Deactivated successfully.
Jan 21 08:48:42 np0005590528 podman[101529]: 2026-01-21 13:48:42.949028738 +0000 UTC m=+0.841106951 container died 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:48:43 np0005590528 systemd[1]: var-lib-containers-storage-overlay-a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134-merged.mount: Deactivated successfully.
Jan 21 08:48:43 np0005590528 podman[101529]: 2026-01-21 13:48:43.111589559 +0000 UTC m=+1.003667732 container remove 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 08:48:43 np0005590528 systemd[1]: libpod-conmon-9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83.scope: Deactivated successfully.
Jan 21 08:48:43 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 21 08:48:43 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 21 08:48:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:43 np0005590528 podman[101646]: 2026-01-21 13:48:43.665065898 +0000 UTC m=+0.068110881 container create a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 08:48:43 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 21 08:48:43 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 21 08:48:43 np0005590528 podman[101646]: 2026-01-21 13:48:43.624185462 +0000 UTC m=+0.027230465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:48:43 np0005590528 systemd[1]: Started libpod-conmon-a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae.scope.
Jan 21 08:48:43 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:48:43 np0005590528 podman[101646]: 2026-01-21 13:48:43.791249988 +0000 UTC m=+0.194295021 container init a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:48:43 np0005590528 podman[101646]: 2026-01-21 13:48:43.797688654 +0000 UTC m=+0.200733627 container start a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 08:48:43 np0005590528 agitated_benz[101662]: 167 167
Jan 21 08:48:43 np0005590528 systemd[1]: libpod-a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae.scope: Deactivated successfully.
Jan 21 08:48:43 np0005590528 podman[101646]: 2026-01-21 13:48:43.822748921 +0000 UTC m=+0.225793934 container attach a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 08:48:43 np0005590528 podman[101646]: 2026-01-21 13:48:43.823181083 +0000 UTC m=+0.226226066 container died a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:48:43 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8359b6be5575aa59c7e77252a7c859dfd42547dea463eaccb1d9fed9b6c5913b-merged.mount: Deactivated successfully.
Jan 21 08:48:44 np0005590528 podman[101646]: 2026-01-21 13:48:44.34137179 +0000 UTC m=+0.744416743 container remove a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:48:44 np0005590528 systemd[1]: libpod-conmon-a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae.scope: Deactivated successfully.
Jan 21 08:48:44 np0005590528 podman[101688]: 2026-01-21 13:48:44.510701435 +0000 UTC m=+0.040585530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:48:44 np0005590528 podman[101688]: 2026-01-21 13:48:44.673046819 +0000 UTC m=+0.202930854 container create a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:48:44 np0005590528 systemd[1]: Started libpod-conmon-a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10.scope.
Jan 21 08:48:44 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:48:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:45 np0005590528 podman[101688]: 2026-01-21 13:48:45.241147667 +0000 UTC m=+0.771031712 container init a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:48:45 np0005590528 podman[101688]: 2026-01-21 13:48:45.25332455 +0000 UTC m=+0.783208585 container start a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 08:48:45 np0005590528 podman[101688]: 2026-01-21 13:48:45.296884726 +0000 UTC m=+0.826768761 container attach a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:48:45 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]: {
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:    "0": [
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:        {
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "devices": [
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "/dev/loop3"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            ],
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_name": "ceph_lv0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_size": "21470642176",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "name": "ceph_lv0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "tags": {
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cluster_name": "ceph",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.crush_device_class": "",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.encrypted": "0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.objectstore": "bluestore",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osd_id": "0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.type": "block",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.vdo": "0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.with_tpm": "0"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            },
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "type": "block",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "vg_name": "ceph_vg0"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:        }
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:    ],
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:    "1": [
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:        {
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "devices": [
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "/dev/loop4"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            ],
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_name": "ceph_lv1",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_size": "21470642176",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "name": "ceph_lv1",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "tags": {
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cluster_name": "ceph",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.crush_device_class": "",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.encrypted": "0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.objectstore": "bluestore",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osd_id": "1",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.type": "block",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.vdo": "0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.with_tpm": "0"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            },
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "type": "block",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "vg_name": "ceph_vg1"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:        }
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:    ],
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:    "2": [
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:        {
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "devices": [
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "/dev/loop5"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            ],
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_name": "ceph_lv2",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_size": "21470642176",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "name": "ceph_lv2",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "tags": {
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.cluster_name": "ceph",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.crush_device_class": "",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.encrypted": "0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.objectstore": "bluestore",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osd_id": "2",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.type": "block",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.vdo": "0",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:                "ceph.with_tpm": "0"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            },
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "type": "block",
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:            "vg_name": "ceph_vg2"
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:        }
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]:    ]
Jan 21 08:48:45 np0005590528 interesting_bassi[101705]: }
Jan 21 08:48:45 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 21 08:48:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:45 np0005590528 systemd[1]: libpod-a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10.scope: Deactivated successfully.
Jan 21 08:48:45 np0005590528 podman[101688]: 2026-01-21 13:48:45.595982713 +0000 UTC m=+1.125866758 container died a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 08:48:45 np0005590528 systemd[1]: var-lib-containers-storage-overlay-263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa-merged.mount: Deactivated successfully.
Jan 21 08:48:45 np0005590528 podman[101688]: 2026-01-21 13:48:45.826292253 +0000 UTC m=+1.356176298 container remove a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:48:45 np0005590528 systemd[1]: libpod-conmon-a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10.scope: Deactivated successfully.
Jan 21 08:48:46 np0005590528 podman[101788]: 2026-01-21 13:48:46.417842416 +0000 UTC m=+0.044023358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:48:46 np0005590528 podman[101788]: 2026-01-21 13:48:46.534536931 +0000 UTC m=+0.160717813 container create 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:48:46 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 21 08:48:46 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 21 08:48:46 np0005590528 systemd[1]: Started libpod-conmon-87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef.scope.
Jan 21 08:48:46 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:48:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:46 np0005590528 podman[101788]: 2026-01-21 13:48:46.695343836 +0000 UTC m=+0.321524768 container init 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:48:46 np0005590528 podman[101788]: 2026-01-21 13:48:46.702978123 +0000 UTC m=+0.329159015 container start 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 08:48:46 np0005590528 objective_mendel[101804]: 167 167
Jan 21 08:48:46 np0005590528 systemd[1]: libpod-87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef.scope: Deactivated successfully.
Jan 21 08:48:46 np0005590528 podman[101788]: 2026-01-21 13:48:46.725367232 +0000 UTC m=+0.351548144 container attach 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 08:48:46 np0005590528 podman[101788]: 2026-01-21 13:48:46.725848614 +0000 UTC m=+0.352029506 container died 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 08:48:46 np0005590528 systemd[1]: var-lib-containers-storage-overlay-098b712bd7fb031e63d6f63a5f20d60b2049065a1b65cae29541a8eaf615cdb0-merged.mount: Deactivated successfully.
Jan 21 08:48:46 np0005590528 podman[101788]: 2026-01-21 13:48:46.916430008 +0000 UTC m=+0.542610910 container remove 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:48:46 np0005590528 systemd[1]: libpod-conmon-87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef.scope: Deactivated successfully.
Jan 21 08:48:47 np0005590528 podman[101829]: 2026-01-21 13:48:47.175751677 +0000 UTC m=+0.094956824 container create 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 08:48:47 np0005590528 podman[101829]: 2026-01-21 13:48:47.123549859 +0000 UTC m=+0.042755056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:48:47 np0005590528 systemd[1]: Started libpod-conmon-88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2.scope.
Jan 21 08:48:47 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:48:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:48:47 np0005590528 podman[101829]: 2026-01-21 13:48:47.337135686 +0000 UTC m=+0.256340833 container init 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:48:47 np0005590528 podman[101829]: 2026-01-21 13:48:47.349028394 +0000 UTC m=+0.268233541 container start 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 21 08:48:47 np0005590528 podman[101829]: 2026-01-21 13:48:47.377439428 +0000 UTC m=+0.296644575 container attach 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 08:48:47 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 21 08:48:47 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 21 08:48:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:47 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 21 08:48:47 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 21 08:48:48 np0005590528 lvm[101926]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:48:48 np0005590528 lvm[101926]: VG ceph_vg1 finished
Jan 21 08:48:48 np0005590528 lvm[101924]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:48:48 np0005590528 lvm[101924]: VG ceph_vg0 finished
Jan 21 08:48:48 np0005590528 lvm[101927]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:48:48 np0005590528 lvm[101927]: VG ceph_vg2 finished
Jan 21 08:48:48 np0005590528 brave_ellis[101846]: {}
Jan 21 08:48:48 np0005590528 systemd[1]: libpod-88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2.scope: Deactivated successfully.
Jan 21 08:48:48 np0005590528 podman[101829]: 2026-01-21 13:48:48.183457542 +0000 UTC m=+1.102662689 container died 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 08:48:48 np0005590528 systemd[1]: libpod-88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2.scope: Consumed 1.310s CPU time.
Jan 21 08:48:48 np0005590528 systemd[1]: var-lib-containers-storage-overlay-16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1-merged.mount: Deactivated successfully.
Jan 21 08:48:48 np0005590528 podman[101829]: 2026-01-21 13:48:48.335536521 +0000 UTC m=+1.254741668 container remove 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 21 08:48:48 np0005590528 systemd[1]: libpod-conmon-88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2.scope: Deactivated successfully.
Jan 21 08:48:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:48:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:48:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:48:48 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 21 08:48:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:48:48 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 21 08:48:48 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 21 08:48:48 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 21 08:48:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:48:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:48:49 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 21 08:48:49 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 21 08:48:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:48:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:48:50 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 21 08:48:50 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 21 08:48:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 21 08:48:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 21 08:48:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:52 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 21 08:48:52 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 21 08:48:52 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 21 08:48:52 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 21 08:48:52 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 21 08:48:52 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 21 08:48:53 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 21 08:48:53 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 21 08:48:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:54 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 21 08:48:54 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 21 08:48:54 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 21 08:48:54 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 21 08:48:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:56 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 21 08:48:56 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 21 08:48:56 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 21 08:48:56 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 21 08:48:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:48:57 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 21 08:48:57 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 21 08:48:57 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 21 08:48:57 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 21 08:48:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:48:58 np0005590528 python3.9[102121]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:48:58 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 21 08:48:58 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 21 08:48:59 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 21 08:48:59 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 21 08:48:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:00 np0005590528 python3.9[102408]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 21 08:49:01 np0005590528 python3.9[102560]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 21 08:49:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:01 np0005590528 python3.9[102712]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:49:02 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 21 08:49:02 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 21 08:49:02 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 21 08:49:02 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 21 08:49:02 np0005590528 python3.9[102864]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 21 08:49:03 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 21 08:49:03 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 21 08:49:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:04 np0005590528 python3.9[103016]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:49:04 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 21 08:49:04 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 21 08:49:04 np0005590528 python3.9[103168]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:49:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:05 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 21 08:49:05 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 21 08:49:05 np0005590528 python3.9[103246]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:49:06 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 21 08:49:06 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 21 08:49:06 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 21 08:49:06 np0005590528 python3.9[103398]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:49:06 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 21 08:49:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:07 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 21 08:49:07 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 21 08:49:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:07 np0005590528 python3.9[103552]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 21 08:49:08 np0005590528 python3.9[103705]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 21 08:49:09 np0005590528 python3.9[103858]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 08:49:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:10 np0005590528 python3.9[104010]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 21 08:49:10 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 21 08:49:10 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 21 08:49:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:49:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:49:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:49:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:49:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:49:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:49:11 np0005590528 python3.9[104162]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:49:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:12 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 21 08:49:12 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 21 08:49:13 np0005590528 python3.9[104315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:49:13 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 21 08:49:13 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 21 08:49:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:13 np0005590528 python3.9[104467]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:49:14 np0005590528 python3.9[104545]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:49:14 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 21 08:49:14 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 21 08:49:15 np0005590528 python3.9[104697]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:49:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:15 np0005590528 python3.9[104775]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:49:16 np0005590528 python3.9[104927]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:49:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:17 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 21 08:49:17 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 21 08:49:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:17 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 21 08:49:17 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 21 08:49:18 np0005590528 python3.9[105078]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:49:19 np0005590528 python3.9[105230]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 21 08:49:19 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 21 08:49:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:19 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 21 08:49:20 np0005590528 python3.9[105380]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:49:20 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 21 08:49:20 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 21 08:49:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:21 np0005590528 python3.9[105532]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:49:21 np0005590528 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 21 08:49:21 np0005590528 systemd[1]: tuned.service: Deactivated successfully.
Jan 21 08:49:21 np0005590528 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 21 08:49:21 np0005590528 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 08:49:22 np0005590528 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 08:49:22 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 21 08:49:22 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 21 08:49:22 np0005590528 python3.9[105693]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 21 08:49:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:23 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 21 08:49:23 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 21 08:49:24 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 21 08:49:24 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 21 08:49:25 np0005590528 python3.9[105845]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:49:25 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 21 08:49:25 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 21 08:49:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:25 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 21 08:49:25 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 21 08:49:25 np0005590528 python3.9[105999]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:49:26 np0005590528 systemd[1]: session-35.scope: Deactivated successfully.
Jan 21 08:49:26 np0005590528 systemd[1]: session-35.scope: Consumed 1min 7.322s CPU time.
Jan 21 08:49:26 np0005590528 systemd-logind[780]: Session 35 logged out. Waiting for processes to exit.
Jan 21 08:49:26 np0005590528 systemd-logind[780]: Removed session 35.
Jan 21 08:49:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:26 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 21 08:49:26 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 21 08:49:27 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 21 08:49:27 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 21 08:49:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:28 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 21 08:49:28 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 21 08:49:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:30 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 21 08:49:30 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 21 08:49:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:31 np0005590528 systemd-logind[780]: New session 36 of user zuul.
Jan 21 08:49:31 np0005590528 systemd[1]: Started Session 36 of User zuul.
Jan 21 08:49:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:32 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 21 08:49:32 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 21 08:49:32 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 21 08:49:32 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 21 08:49:32 np0005590528 python3.9[106179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:49:33 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 21 08:49:33 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 21 08:49:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:34 np0005590528 python3.9[106335]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 21 08:49:34 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 21 08:49:34 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 21 08:49:35 np0005590528 python3.9[106488]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:49:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:36 np0005590528 python3.9[106572]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 08:49:36 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 21 08:49:36 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 21 08:49:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:37 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 21 08:49:37 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 21 08:49:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:38 np0005590528 python3.9[106725]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:49:39 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 21 08:49:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:49:39
Jan 21 08:49:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:49:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:49:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', '.mgr']
Jan 21 08:49:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:49:39 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 21 08:49:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:39 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 21 08:49:39 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 21 08:49:40 np0005590528 python3.9[106878]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:49:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:49:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:49:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:49:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:49:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:49:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:49:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:49:40 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:49:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:41 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 21 08:49:41 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 21 08:49:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:41 np0005590528 python3.9[107031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:49:42 np0005590528 python3.9[107183]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 21 08:49:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:43 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 21 08:49:43 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 21 08:49:44 np0005590528 python3.9[107333]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:49:44 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 21 08:49:44 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 21 08:49:45 np0005590528 python3.9[107491]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:49:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:45 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 21 08:49:45 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 21 08:49:46 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 21 08:49:46 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 21 08:49:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:46 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 21 08:49:46 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 21 08:49:47 np0005590528 python3.9[107644]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:49:47 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 21 08:49:47 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 21 08:49:47 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 21 08:49:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:47 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 21 08:49:47 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 21 08:49:47 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 21 08:49:48 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 21 08:49:48 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 21 08:49:48 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 21 08:49:48 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 21 08:49:49 np0005590528 python3.9[107981]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:49:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:49 np0005590528 python3.9[108214]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:49:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:49:49 np0005590528 podman[108227]: 2026-01-21 13:49:49.812869083 +0000 UTC m=+0.079944777 container create e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:49:49 np0005590528 systemd[76413]: Created slice User Background Tasks Slice.
Jan 21 08:49:49 np0005590528 systemd[76413]: Starting Cleanup of User's Temporary Files and Directories...
Jan 21 08:49:49 np0005590528 systemd[1]: Started libpod-conmon-e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3.scope.
Jan 21 08:49:49 np0005590528 systemd[76413]: Finished Cleanup of User's Temporary Files and Directories.
Jan 21 08:49:49 np0005590528 podman[108227]: 2026-01-21 13:49:49.774205996 +0000 UTC m=+0.041281680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:49:49 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:49:49 np0005590528 podman[108227]: 2026-01-21 13:49:49.916625008 +0000 UTC m=+0.183700732 container init e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 08:49:49 np0005590528 podman[108227]: 2026-01-21 13:49:49.923596955 +0000 UTC m=+0.190672629 container start e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:49:49 np0005590528 podman[108227]: 2026-01-21 13:49:49.927521928 +0000 UTC m=+0.194597632 container attach e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 08:49:49 np0005590528 heuristic_goldwasser[108248]: 167 167
Jan 21 08:49:49 np0005590528 systemd[1]: libpod-e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3.scope: Deactivated successfully.
Jan 21 08:49:49 np0005590528 conmon[108248]: conmon e59be3fe22026dbe90e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3.scope/container/memory.events
Jan 21 08:49:49 np0005590528 podman[108227]: 2026-01-21 13:49:49.930766286 +0000 UTC m=+0.197841990 container died e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:49:49 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6d5b4334d94eb08db5cec0c2fc3a62f4add7b7eba07fbd49bd9c339d7a740a62-merged.mount: Deactivated successfully.
Jan 21 08:49:49 np0005590528 podman[108227]: 2026-01-21 13:49:49.972422123 +0000 UTC m=+0.239497787 container remove e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:49:49 np0005590528 systemd[1]: libpod-conmon-e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3.scope: Deactivated successfully.
Jan 21 08:49:50 np0005590528 podman[108347]: 2026-01-21 13:49:50.167121077 +0000 UTC m=+0.057415716 container create dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:49:50 np0005590528 systemd[1]: Started libpod-conmon-dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2.scope.
Jan 21 08:49:50 np0005590528 podman[108347]: 2026-01-21 13:49:50.142831345 +0000 UTC m=+0.033126064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:49:50 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:49:50 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:50 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:50 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:50 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:50 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:50 np0005590528 podman[108347]: 2026-01-21 13:49:50.265398651 +0000 UTC m=+0.155693320 container init dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:49:50 np0005590528 podman[108347]: 2026-01-21 13:49:50.278688079 +0000 UTC m=+0.168982728 container start dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 08:49:50 np0005590528 podman[108347]: 2026-01-21 13:49:50.282589112 +0000 UTC m=+0.172883761 container attach dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:49:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:49:50 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 21 08:49:50 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 21 08:49:50 np0005590528 python3.9[108443]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:49:50 np0005590528 epic_hugle[108403]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:49:50 np0005590528 epic_hugle[108403]: --> All data devices are unavailable
Jan 21 08:49:50 np0005590528 systemd[1]: libpod-dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2.scope: Deactivated successfully.
Jan 21 08:49:50 np0005590528 podman[108347]: 2026-01-21 13:49:50.809481022 +0000 UTC m=+0.699775731 container died dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:49:50 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0-merged.mount: Deactivated successfully.
Jan 21 08:49:50 np0005590528 podman[108347]: 2026-01-21 13:49:50.870961724 +0000 UTC m=+0.761256393 container remove dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:49:50 np0005590528 systemd[1]: libpod-conmon-dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2.scope: Deactivated successfully.
Jan 21 08:49:51 np0005590528 podman[108536]: 2026-01-21 13:49:51.41979837 +0000 UTC m=+0.062401965 container create 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 08:49:51 np0005590528 systemd[1]: Started libpod-conmon-9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6.scope.
Jan 21 08:49:51 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:49:51 np0005590528 podman[108536]: 2026-01-21 13:49:51.39310202 +0000 UTC m=+0.035705615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:49:51 np0005590528 podman[108536]: 2026-01-21 13:49:51.495484533 +0000 UTC m=+0.138088188 container init 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 08:49:51 np0005590528 podman[108536]: 2026-01-21 13:49:51.507776327 +0000 UTC m=+0.150379892 container start 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 08:49:51 np0005590528 focused_kilby[108552]: 167 167
Jan 21 08:49:51 np0005590528 podman[108536]: 2026-01-21 13:49:51.511679031 +0000 UTC m=+0.154282696 container attach 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 08:49:51 np0005590528 systemd[1]: libpod-9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6.scope: Deactivated successfully.
Jan 21 08:49:51 np0005590528 podman[108536]: 2026-01-21 13:49:51.513962475 +0000 UTC m=+0.156566070 container died 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:49:51 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1850a9cb523f8bbe592b2dcf648719ee742657a7a681b4f6c1ec70e4eae78315-merged.mount: Deactivated successfully.
Jan 21 08:49:51 np0005590528 podman[108536]: 2026-01-21 13:49:51.566855612 +0000 UTC m=+0.209459177 container remove 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:49:51 np0005590528 systemd[1]: libpod-conmon-9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6.scope: Deactivated successfully.
Jan 21 08:49:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:51 np0005590528 podman[108576]: 2026-01-21 13:49:51.753696258 +0000 UTC m=+0.057293674 container create 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:49:51 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 21 08:49:51 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 21 08:49:51 np0005590528 systemd[1]: Started libpod-conmon-9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6.scope.
Jan 21 08:49:51 np0005590528 podman[108576]: 2026-01-21 13:49:51.724260543 +0000 UTC m=+0.027858009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:49:51 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:49:51 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:51 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:51 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:51 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:51 np0005590528 podman[108576]: 2026-01-21 13:49:51.867030662 +0000 UTC m=+0.170628078 container init 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 08:49:51 np0005590528 podman[108576]: 2026-01-21 13:49:51.873900187 +0000 UTC m=+0.177497563 container start 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 08:49:51 np0005590528 podman[108576]: 2026-01-21 13:49:51.887743857 +0000 UTC m=+0.191341243 container attach 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]: {
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:    "0": [
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:        {
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "devices": [
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "/dev/loop3"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            ],
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_name": "ceph_lv0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_size": "21470642176",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "name": "ceph_lv0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "tags": {
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cluster_name": "ceph",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.crush_device_class": "",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.encrypted": "0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.objectstore": "bluestore",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osd_id": "0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.type": "block",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.vdo": "0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.with_tpm": "0"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            },
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "type": "block",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "vg_name": "ceph_vg0"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:        }
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:    ],
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:    "1": [
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:        {
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "devices": [
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "/dev/loop4"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            ],
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_name": "ceph_lv1",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_size": "21470642176",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "name": "ceph_lv1",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "tags": {
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cluster_name": "ceph",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.crush_device_class": "",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.encrypted": "0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.objectstore": "bluestore",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osd_id": "1",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.type": "block",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.vdo": "0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.with_tpm": "0"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            },
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "type": "block",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "vg_name": "ceph_vg1"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:        }
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:    ],
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:    "2": [
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:        {
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "devices": [
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "/dev/loop5"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            ],
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_name": "ceph_lv2",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_size": "21470642176",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "name": "ceph_lv2",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "tags": {
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.cluster_name": "ceph",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.crush_device_class": "",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.encrypted": "0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.objectstore": "bluestore",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osd_id": "2",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.type": "block",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.vdo": "0",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:                "ceph.with_tpm": "0"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            },
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "type": "block",
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:            "vg_name": "ceph_vg2"
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:        }
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]:    ]
Jan 21 08:49:52 np0005590528 hardcore_merkle[108592]: }
Jan 21 08:49:52 np0005590528 systemd[1]: libpod-9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6.scope: Deactivated successfully.
Jan 21 08:49:52 np0005590528 podman[108576]: 2026-01-21 13:49:52.309882189 +0000 UTC m=+0.613479635 container died 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 08:49:52 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84-merged.mount: Deactivated successfully.
Jan 21 08:49:52 np0005590528 podman[108576]: 2026-01-21 13:49:52.360502171 +0000 UTC m=+0.664099547 container remove 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:49:52 np0005590528 systemd[1]: libpod-conmon-9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6.scope: Deactivated successfully.
Jan 21 08:49:52 np0005590528 python3.9[108763]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:49:52 np0005590528 podman[108827]: 2026-01-21 13:49:52.780808628 +0000 UTC m=+0.040759218 container create d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:49:52 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 21 08:49:52 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 21 08:49:52 np0005590528 systemd[1]: Started libpod-conmon-d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339.scope.
Jan 21 08:49:52 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:49:52 np0005590528 podman[108827]: 2026-01-21 13:49:52.858901138 +0000 UTC m=+0.118851788 container init d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 08:49:52 np0005590528 podman[108827]: 2026-01-21 13:49:52.76379119 +0000 UTC m=+0.023741780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:49:52 np0005590528 podman[108827]: 2026-01-21 13:49:52.866013378 +0000 UTC m=+0.125963948 container start d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:49:52 np0005590528 podman[108827]: 2026-01-21 13:49:52.869584624 +0000 UTC m=+0.129535274 container attach d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:49:52 np0005590528 focused_brown[108844]: 167 167
Jan 21 08:49:52 np0005590528 systemd[1]: libpod-d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339.scope: Deactivated successfully.
Jan 21 08:49:52 np0005590528 podman[108827]: 2026-01-21 13:49:52.871547491 +0000 UTC m=+0.131498051 container died d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:49:52 np0005590528 systemd[1]: var-lib-containers-storage-overlay-089a9b083ac005eb86a6541851716f35bb5d6a9a44ebbe86350ef8a493a6b906-merged.mount: Deactivated successfully.
Jan 21 08:49:52 np0005590528 podman[108827]: 2026-01-21 13:49:52.908784703 +0000 UTC m=+0.168735263 container remove d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:49:52 np0005590528 systemd[1]: libpod-conmon-d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339.scope: Deactivated successfully.
Jan 21 08:49:53 np0005590528 podman[108867]: 2026-01-21 13:49:53.067137276 +0000 UTC m=+0.042010297 container create 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 21 08:49:53 np0005590528 systemd[1]: Started libpod-conmon-1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9.scope.
Jan 21 08:49:53 np0005590528 podman[108867]: 2026-01-21 13:49:53.047660119 +0000 UTC m=+0.022533160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:49:53 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:49:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:53 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:49:53 np0005590528 podman[108867]: 2026-01-21 13:49:53.167746936 +0000 UTC m=+0.142619977 container init 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:49:53 np0005590528 podman[108867]: 2026-01-21 13:49:53.176956506 +0000 UTC m=+0.151829517 container start 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:49:53 np0005590528 podman[108867]: 2026-01-21 13:49:53.180654105 +0000 UTC m=+0.155527186 container attach 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:49:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:53 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 21 08:49:53 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 21 08:49:53 np0005590528 lvm[108963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:49:53 np0005590528 lvm[108963]: VG ceph_vg0 finished
Jan 21 08:49:53 np0005590528 lvm[108964]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:49:53 np0005590528 lvm[108964]: VG ceph_vg1 finished
Jan 21 08:49:53 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 21 08:49:53 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 21 08:49:53 np0005590528 lvm[108966]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:49:53 np0005590528 lvm[108966]: VG ceph_vg2 finished
Jan 21 08:49:53 np0005590528 loving_cerf[108884]: {}
Jan 21 08:49:54 np0005590528 systemd[1]: libpod-1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9.scope: Deactivated successfully.
Jan 21 08:49:54 np0005590528 systemd[1]: libpod-1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9.scope: Consumed 1.256s CPU time.
Jan 21 08:49:54 np0005590528 podman[108867]: 2026-01-21 13:49:54.00378777 +0000 UTC m=+0.978660811 container died 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:49:54 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5-merged.mount: Deactivated successfully.
Jan 21 08:49:54 np0005590528 podman[108867]: 2026-01-21 13:49:54.056331788 +0000 UTC m=+1.031204809 container remove 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:49:54 np0005590528 systemd[1]: libpod-conmon-1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9.scope: Deactivated successfully.
Jan 21 08:49:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:49:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:49:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:49:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:49:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:49:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:49:54 np0005590528 python3.9[109157]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:49:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:55 np0005590528 python3.9[109311]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 21 08:49:56 np0005590528 systemd[1]: session-36.scope: Deactivated successfully.
Jan 21 08:49:56 np0005590528 systemd[1]: session-36.scope: Consumed 19.262s CPU time.
Jan 21 08:49:56 np0005590528 systemd-logind[780]: Session 36 logged out. Waiting for processes to exit.
Jan 21 08:49:56 np0005590528 systemd-logind[780]: Removed session 36.
Jan 21 08:49:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:49:56 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 21 08:49:56 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 21 08:49:57 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 21 08:49:57 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 21 08:49:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:49:58 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 21 08:49:58 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 21 08:49:59 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 21 08:49:59 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 21 08:49:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:00 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 21 08:50:00 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 21 08:50:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:01 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 21 08:50:01 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 21 08:50:01 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 21 08:50:01 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 21 08:50:01 np0005590528 systemd-logind[780]: New session 37 of user zuul.
Jan 21 08:50:01 np0005590528 systemd[1]: Started Session 37 of User zuul.
Jan 21 08:50:02 np0005590528 python3.9[109490]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:50:03 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 21 08:50:03 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 21 08:50:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:03 np0005590528 python3.9[109644]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:50:05 np0005590528 python3.9[109837]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:50:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:05 np0005590528 systemd[1]: session-37.scope: Deactivated successfully.
Jan 21 08:50:05 np0005590528 systemd[1]: session-37.scope: Consumed 2.785s CPU time.
Jan 21 08:50:05 np0005590528 systemd-logind[780]: Session 37 logged out. Waiting for processes to exit.
Jan 21 08:50:05 np0005590528 systemd-logind[780]: Removed session 37.
Jan 21 08:50:05 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 21 08:50:05 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 21 08:50:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:06 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 21 08:50:06 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 21 08:50:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:07 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 21 08:50:07 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 21 08:50:08 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 21 08:50:08 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 21 08:50:09 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 21 08:50:09 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 21 08:50:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:09 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 21 08:50:09 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 21 08:50:10 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 21 08:50:10 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 21 08:50:10 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 21 08:50:10 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 21 08:50:10 np0005590528 systemd-logind[780]: New session 38 of user zuul.
Jan 21 08:50:10 np0005590528 systemd[1]: Started Session 38 of User zuul.
Jan 21 08:50:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:50:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:50:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:50:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:50:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:50:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:50:11 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 21 08:50:11 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 21 08:50:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:11 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 21 08:50:11 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 21 08:50:11 np0005590528 python3.9[110016]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:50:12 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 21 08:50:12 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 21 08:50:12 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 21 08:50:12 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 21 08:50:12 np0005590528 python3.9[110170]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:50:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:14 np0005590528 python3.9[110326]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:50:14 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 21 08:50:14 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 21 08:50:14 np0005590528 python3.9[110410]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:50:15 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 21 08:50:15 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 21 08:50:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:16 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 21 08:50:16 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 21 08:50:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:16 np0005590528 python3.9[110563]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:50:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:17 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 21 08:50:17 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 21 08:50:18 np0005590528 python3.9[110758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:50:18 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 21 08:50:18 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 21 08:50:18 np0005590528 python3.9[110910]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:50:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:19 np0005590528 python3.9[111075]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:50:19 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 21 08:50:19 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 21 08:50:20 np0005590528 python3.9[111153]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:50:20 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 21 08:50:20 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 21 08:50:20 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 21 08:50:20 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 21 08:50:20 np0005590528 python3.9[111305]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:50:21 np0005590528 python3.9[111383]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:50:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:21 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 21 08:50:21 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 21 08:50:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:21 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 21 08:50:21 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 21 08:50:22 np0005590528 python3.9[111535]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:50:22 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 21 08:50:22 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 21 08:50:22 np0005590528 python3.9[111687]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:50:22 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 21 08:50:22 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 21 08:50:23 np0005590528 python3.9[111839]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:50:23 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 21 08:50:23 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 21 08:50:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:23 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 21 08:50:23 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 21 08:50:24 np0005590528 python3.9[111991]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:50:24 np0005590528 python3.9[112143]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:50:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:25 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 21 08:50:25 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 21 08:50:26 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 21 08:50:26 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 21 08:50:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:27 np0005590528 python3.9[112296]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:50:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:27 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 21 08:50:27 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 21 08:50:27 np0005590528 python3.9[112450]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:50:28 np0005590528 python3.9[112602]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:50:29 np0005590528 python3.9[112754]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:50:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:30 np0005590528 python3.9[112907]: ansible-service_facts Invoked
Jan 21 08:50:30 np0005590528 network[112924]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 08:50:30 np0005590528 network[112925]: 'network-scripts' will be removed from distribution in near future.
Jan 21 08:50:30 np0005590528 network[112926]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 08:50:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:31 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 21 08:50:31 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 21 08:50:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:34 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 21 08:50:34 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 21 08:50:35 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 21 08:50:35 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 21 08:50:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:36 np0005590528 python3.9[113378]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:50:36 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 21 08:50:36 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 21 08:50:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:36 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 21 08:50:36 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 21 08:50:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:37 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 21 08:50:37 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 21 08:50:38 np0005590528 python3.9[113531]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 21 08:50:38 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 21 08:50:38 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 21 08:50:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:50:39
Jan 21 08:50:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:50:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:50:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'vms', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 21 08:50:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:50:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:39 np0005590528 python3.9[113683]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:50:40 np0005590528 python3.9[113761]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:50:40 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 21 08:50:40 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 21 08:50:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:50:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:50:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:50:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:50:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:50:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:50:41 np0005590528 python3.9[113913]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:50:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:41 np0005590528 python3.9[113991]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:50:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:41 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 21 08:50:41 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 21 08:50:42 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 21 08:50:42 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 21 08:50:42 np0005590528 python3.9[114143]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:50:43 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 21 08:50:43 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 21 08:50:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:43 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 21 08:50:43 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 21 08:50:43 np0005590528 python3.9[114295]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:50:44 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 21 08:50:44 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 21 08:50:45 np0005590528 python3.9[114379]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:50:45 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 21 08:50:45 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 21 08:50:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:45 np0005590528 systemd-logind[780]: Session 38 logged out. Waiting for processes to exit.
Jan 21 08:50:45 np0005590528 systemd[1]: session-38.scope: Deactivated successfully.
Jan 21 08:50:45 np0005590528 systemd[1]: session-38.scope: Consumed 25.934s CPU time.
Jan 21 08:50:45 np0005590528 systemd-logind[780]: Removed session 38.
Jan 21 08:50:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:47 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 21 08:50:47 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 21 08:50:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:47 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 21 08:50:47 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 21 08:50:49 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 21 08:50:49 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 21 08:50:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:50 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 21 08:50:50 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:50:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:50:50 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 21 08:50:50 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 21 08:50:51 np0005590528 systemd-logind[780]: New session 39 of user zuul.
Jan 21 08:50:51 np0005590528 systemd[1]: Started Session 39 of User zuul.
Jan 21 08:50:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 21 08:50:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 21 08:50:52 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 21 08:50:52 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 21 08:50:52 np0005590528 python3.9[114561]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.783512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003452783681, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7144, "num_deletes": 252, "total_data_size": 9893397, "memory_usage": 10080128, "flush_reason": "Manual Compaction"}
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003452860230, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7762988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7287, "table_properties": {"data_size": 7736694, "index_size": 17219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 73382, "raw_average_key_size": 23, "raw_value_size": 7675304, "raw_average_value_size": 2413, "num_data_blocks": 757, "num_entries": 3180, "num_filter_entries": 3180, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003061, "oldest_key_time": 1769003061, "file_creation_time": 1769003452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 76806 microseconds, and 27391 cpu microseconds.
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.860317) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7762988 bytes OK
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.860351) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.862064) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.862092) EVENT_LOG_v1 {"time_micros": 1769003452862084, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.862129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9862517, prev total WAL file size 9862517, number of live WAL files 2.
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:50:52 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.866004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7581KB) 13(58KB) 8(1944B)]
Jan 21 08:50:52 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003452866729, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7824892, "oldest_snapshot_seqno": -1}
Jan 21 08:50:52 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3005 keys, 7777743 bytes, temperature: kUnknown
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003453103928, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7777743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7751863, "index_size": 17258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7557, "raw_key_size": 71785, "raw_average_key_size": 23, "raw_value_size": 7691824, "raw_average_value_size": 2559, "num_data_blocks": 760, "num_entries": 3005, "num_filter_entries": 3005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769003452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:53.104197) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7777743 bytes
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:53.106973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.0 rd, 32.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3295, records dropped: 290 output_compression: NoCompression
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:53.106995) EVENT_LOG_v1 {"time_micros": 1769003453106984, "job": 4, "event": "compaction_finished", "compaction_time_micros": 237019, "compaction_time_cpu_micros": 31843, "output_level": 6, "num_output_files": 1, "total_output_size": 7777743, "num_input_records": 3295, "num_output_records": 3005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003453108948, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003453109076, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003453109203, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 21 08:50:53 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.865797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:50:53 np0005590528 python3.9[114714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:50:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:53 np0005590528 python3.9[114792]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:50:54 np0005590528 systemd[1]: session-39.scope: Deactivated successfully.
Jan 21 08:50:54 np0005590528 systemd[1]: session-39.scope: Consumed 1.693s CPU time.
Jan 21 08:50:54 np0005590528 systemd-logind[780]: Session 39 logged out. Waiting for processes to exit.
Jan 21 08:50:54 np0005590528 systemd-logind[780]: Removed session 39.
Jan 21 08:50:54 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 21 08:50:54 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:50:55 np0005590528 podman[114961]: 2026-01-21 13:50:55.527033446 +0000 UTC m=+0.049812150 container create 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:50:55 np0005590528 systemd[1]: Started libpod-conmon-9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d.scope.
Jan 21 08:50:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:50:55 np0005590528 podman[114961]: 2026-01-21 13:50:55.504173495 +0000 UTC m=+0.026952219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:50:55 np0005590528 podman[114961]: 2026-01-21 13:50:55.619537761 +0000 UTC m=+0.142316535 container init 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:50:55 np0005590528 podman[114961]: 2026-01-21 13:50:55.627678076 +0000 UTC m=+0.150456760 container start 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 08:50:55 np0005590528 podman[114961]: 2026-01-21 13:50:55.631630251 +0000 UTC m=+0.154408945 container attach 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 08:50:55 np0005590528 gifted_montalcini[114977]: 167 167
Jan 21 08:50:55 np0005590528 systemd[1]: libpod-9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d.scope: Deactivated successfully.
Jan 21 08:50:55 np0005590528 conmon[114977]: conmon 9ca2e5206b555162dade <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d.scope/container/memory.events
Jan 21 08:50:55 np0005590528 podman[114961]: 2026-01-21 13:50:55.637832241 +0000 UTC m=+0.160610925 container died 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 08:50:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-308508bd4e8dd9fd84e0b3aa2b6b65f94655a8cd89ecc0ab28bc6f59f66623f8-merged.mount: Deactivated successfully.
Jan 21 08:50:55 np0005590528 podman[114961]: 2026-01-21 13:50:55.677015443 +0000 UTC m=+0.199794117 container remove 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:50:55 np0005590528 systemd[1]: libpod-conmon-9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d.scope: Deactivated successfully.
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:50:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:50:55 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 21 08:50:55 np0005590528 podman[115000]: 2026-01-21 13:50:55.873057458 +0000 UTC m=+0.067677279 container create c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 08:50:55 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 21 08:50:55 np0005590528 systemd[1]: Started libpod-conmon-c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2.scope.
Jan 21 08:50:55 np0005590528 podman[115000]: 2026-01-21 13:50:55.84697278 +0000 UTC m=+0.041592641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:50:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:50:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:55 np0005590528 podman[115000]: 2026-01-21 13:50:55.982023708 +0000 UTC m=+0.176643539 container init c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:50:55 np0005590528 podman[115000]: 2026-01-21 13:50:55.997731577 +0000 UTC m=+0.192351398 container start c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:50:56 np0005590528 podman[115000]: 2026-01-21 13:50:56.002355918 +0000 UTC m=+0.196975789 container attach c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:50:56 np0005590528 blissful_mayer[115017]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:50:56 np0005590528 blissful_mayer[115017]: --> All data devices are unavailable
Jan 21 08:50:56 np0005590528 systemd[1]: libpod-c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2.scope: Deactivated successfully.
Jan 21 08:50:56 np0005590528 podman[115000]: 2026-01-21 13:50:56.584205112 +0000 UTC m=+0.778824933 container died c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 08:50:56 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d-merged.mount: Deactivated successfully.
Jan 21 08:50:56 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 21 08:50:56 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 21 08:50:56 np0005590528 podman[115000]: 2026-01-21 13:50:56.651337077 +0000 UTC m=+0.845956868 container remove c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:50:56 np0005590528 systemd[1]: libpod-conmon-c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2.scope: Deactivated successfully.
Jan 21 08:50:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:50:56 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 21 08:50:56 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 21 08:50:57 np0005590528 podman[115113]: 2026-01-21 13:50:57.218257941 +0000 UTC m=+0.053605870 container create df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 08:50:57 np0005590528 systemd[1]: Started libpod-conmon-df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81.scope.
Jan 21 08:50:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:50:57 np0005590528 podman[115113]: 2026-01-21 13:50:57.191866767 +0000 UTC m=+0.027214786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:50:57 np0005590528 podman[115113]: 2026-01-21 13:50:57.301698838 +0000 UTC m=+0.137046787 container init df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 08:50:57 np0005590528 podman[115113]: 2026-01-21 13:50:57.30718934 +0000 UTC m=+0.142537299 container start df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:50:57 np0005590528 podman[115113]: 2026-01-21 13:50:57.312104468 +0000 UTC m=+0.147452427 container attach df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 08:50:57 np0005590528 zen_ride[115130]: 167 167
Jan 21 08:50:57 np0005590528 systemd[1]: libpod-df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81.scope: Deactivated successfully.
Jan 21 08:50:57 np0005590528 podman[115113]: 2026-01-21 13:50:57.315160953 +0000 UTC m=+0.150508882 container died df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:50:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-14639df35d5313158c2ad933d4bd235a31c34b33962ce03cccf7f9c1e7d648ea-merged.mount: Deactivated successfully.
Jan 21 08:50:57 np0005590528 podman[115113]: 2026-01-21 13:50:57.367854129 +0000 UTC m=+0.203202098 container remove df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 08:50:57 np0005590528 systemd[1]: libpod-conmon-df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81.scope: Deactivated successfully.
Jan 21 08:50:57 np0005590528 podman[115154]: 2026-01-21 13:50:57.529535088 +0000 UTC m=+0.049328458 container create 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:50:57 np0005590528 systemd[1]: Started libpod-conmon-21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8.scope.
Jan 21 08:50:57 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 21 08:50:57 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 21 08:50:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:50:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:57 np0005590528 podman[115154]: 2026-01-21 13:50:57.509325992 +0000 UTC m=+0.029119392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:50:57 np0005590528 podman[115154]: 2026-01-21 13:50:57.608337673 +0000 UTC m=+0.128131043 container init 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:50:57 np0005590528 podman[115154]: 2026-01-21 13:50:57.614540362 +0000 UTC m=+0.134333732 container start 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:50:57 np0005590528 podman[115154]: 2026-01-21 13:50:57.618014216 +0000 UTC m=+0.137807636 container attach 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 08:50:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]: {
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:    "0": [
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:        {
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "devices": [
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "/dev/loop3"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            ],
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_name": "ceph_lv0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_size": "21470642176",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "name": "ceph_lv0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "tags": {
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cluster_name": "ceph",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.crush_device_class": "",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.encrypted": "0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.objectstore": "bluestore",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osd_id": "0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.type": "block",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.vdo": "0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.with_tpm": "0"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            },
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "type": "block",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "vg_name": "ceph_vg0"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:        }
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:    ],
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:    "1": [
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:        {
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "devices": [
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "/dev/loop4"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            ],
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_name": "ceph_lv1",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_size": "21470642176",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "name": "ceph_lv1",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "tags": {
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cluster_name": "ceph",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.crush_device_class": "",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.encrypted": "0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.objectstore": "bluestore",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osd_id": "1",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.type": "block",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.vdo": "0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.with_tpm": "0"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            },
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "type": "block",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "vg_name": "ceph_vg1"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:        }
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:    ],
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:    "2": [
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:        {
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "devices": [
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "/dev/loop5"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            ],
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_name": "ceph_lv2",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_size": "21470642176",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "name": "ceph_lv2",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "tags": {
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.cluster_name": "ceph",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.crush_device_class": "",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.encrypted": "0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.objectstore": "bluestore",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osd_id": "2",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.type": "block",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.vdo": "0",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:                "ceph.with_tpm": "0"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            },
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "type": "block",
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:            "vg_name": "ceph_vg2"
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:        }
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]:    ]
Jan 21 08:50:57 np0005590528 thirsty_tharp[115171]: }
Jan 21 08:50:57 np0005590528 systemd[1]: libpod-21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8.scope: Deactivated successfully.
Jan 21 08:50:57 np0005590528 podman[115154]: 2026-01-21 13:50:57.928141745 +0000 UTC m=+0.447935115 container died 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 08:50:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f-merged.mount: Deactivated successfully.
Jan 21 08:50:57 np0005590528 podman[115154]: 2026-01-21 13:50:57.96909322 +0000 UTC m=+0.488886590 container remove 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:50:57 np0005590528 systemd[1]: libpod-conmon-21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8.scope: Deactivated successfully.
Jan 21 08:50:58 np0005590528 podman[115256]: 2026-01-21 13:50:58.474879865 +0000 UTC m=+0.055134877 container create 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:50:58 np0005590528 systemd[1]: Started libpod-conmon-76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43.scope.
Jan 21 08:50:58 np0005590528 podman[115256]: 2026-01-21 13:50:58.446893572 +0000 UTC m=+0.027148664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:50:58 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:50:58 np0005590528 podman[115256]: 2026-01-21 13:50:58.554443779 +0000 UTC m=+0.134698791 container init 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:50:58 np0005590528 podman[115256]: 2026-01-21 13:50:58.561238652 +0000 UTC m=+0.141493654 container start 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:50:58 np0005590528 podman[115256]: 2026-01-21 13:50:58.565044263 +0000 UTC m=+0.145299295 container attach 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:50:58 np0005590528 fervent_cohen[115273]: 167 167
Jan 21 08:50:58 np0005590528 systemd[1]: libpod-76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43.scope: Deactivated successfully.
Jan 21 08:50:58 np0005590528 podman[115256]: 2026-01-21 13:50:58.570214208 +0000 UTC m=+0.150469250 container died 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:50:58 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9443f10b03f70a372327ac87c3d0357dbfad873d85986a9fce52b9ee9dc6e01e-merged.mount: Deactivated successfully.
Jan 21 08:50:58 np0005590528 podman[115256]: 2026-01-21 13:50:58.619999616 +0000 UTC m=+0.200254648 container remove 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:50:58 np0005590528 systemd[1]: libpod-conmon-76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43.scope: Deactivated successfully.
Jan 21 08:50:58 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 21 08:50:58 np0005590528 podman[115297]: 2026-01-21 13:50:58.838268664 +0000 UTC m=+0.057477143 container create 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:50:58 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 21 08:50:58 np0005590528 systemd[1]: Started libpod-conmon-139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef.scope.
Jan 21 08:50:58 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:50:58 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:58 np0005590528 podman[115297]: 2026-01-21 13:50:58.823018498 +0000 UTC m=+0.042227007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:50:58 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:58 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:58 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:50:58 np0005590528 podman[115297]: 2026-01-21 13:50:58.939228923 +0000 UTC m=+0.158437442 container init 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:50:58 np0005590528 podman[115297]: 2026-01-21 13:50:58.948877964 +0000 UTC m=+0.168086473 container start 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 08:50:58 np0005590528 podman[115297]: 2026-01-21 13:50:58.953481406 +0000 UTC m=+0.172689925 container attach 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:50:59 np0005590528 systemd-logind[780]: New session 40 of user zuul.
Jan 21 08:50:59 np0005590528 systemd[1]: Started Session 40 of User zuul.
Jan 21 08:50:59 np0005590528 lvm[115416]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:50:59 np0005590528 lvm[115416]: VG ceph_vg1 finished
Jan 21 08:50:59 np0005590528 lvm[115412]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:50:59 np0005590528 lvm[115412]: VG ceph_vg0 finished
Jan 21 08:50:59 np0005590528 lvm[115421]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:50:59 np0005590528 lvm[115421]: VG ceph_vg2 finished
Jan 21 08:50:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:50:59 np0005590528 lvm[115444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:50:59 np0005590528 lvm[115444]: VG ceph_vg2 finished
Jan 21 08:50:59 np0005590528 hopeful_williamson[115313]: {}
Jan 21 08:50:59 np0005590528 systemd[1]: libpod-139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef.scope: Deactivated successfully.
Jan 21 08:50:59 np0005590528 systemd[1]: libpod-139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef.scope: Consumed 1.261s CPU time.
Jan 21 08:50:59 np0005590528 podman[115297]: 2026-01-21 13:50:59.768394325 +0000 UTC m=+0.987602844 container died 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 08:50:59 np0005590528 systemd[1]: var-lib-containers-storage-overlay-99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad-merged.mount: Deactivated successfully.
Jan 21 08:50:59 np0005590528 podman[115297]: 2026-01-21 13:50:59.826443881 +0000 UTC m=+1.045652380 container remove 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 08:50:59 np0005590528 systemd[1]: libpod-conmon-139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef.scope: Deactivated successfully.
Jan 21 08:50:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:50:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:50:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:50:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:51:00 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 21 08:51:00 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 21 08:51:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:51:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:51:00 np0005590528 python3.9[115588]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:51:00 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 21 08:51:00 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 21 08:51:01 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 21 08:51:01 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 21 08:51:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:01 np0005590528 python3.9[115744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:02 np0005590528 python3.9[115919]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:03 np0005590528 python3.9[115997]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.e63auofc recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:03 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 21 08:51:03 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 21 08:51:04 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 21 08:51:04 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 21 08:51:04 np0005590528 python3.9[116149]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:04 np0005590528 python3.9[116227]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.oajyrk3w recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:05 np0005590528 python3.9[116379]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:51:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:06 np0005590528 python3.9[116531]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:06 np0005590528 python3.9[116609]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:51:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:07 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 21 08:51:07 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 21 08:51:07 np0005590528 python3.9[116761]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:07 np0005590528 python3.9[116839]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:51:08 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 21 08:51:08 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 21 08:51:08 np0005590528 python3.9[116991]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:09 np0005590528 python3.9[117143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:09 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 21 08:51:09 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 21 08:51:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:09 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 21 08:51:09 np0005590528 python3.9[117221]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:09 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 21 08:51:10 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 21 08:51:10 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 21 08:51:10 np0005590528 python3.9[117373]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:10 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 21 08:51:10 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 21 08:51:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:51:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:51:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:51:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:51:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:51:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:51:11 np0005590528 python3.9[117451]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:11 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 21 08:51:11 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 21 08:51:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:12 np0005590528 python3.9[117603]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:51:12 np0005590528 systemd[1]: Reloading.
Jan 21 08:51:12 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:51:12 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:51:13 np0005590528 python3.9[117792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:13 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 21 08:51:13 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 21 08:51:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:13 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 21 08:51:13 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 21 08:51:13 np0005590528 python3.9[117870]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:14 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 21 08:51:14 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 21 08:51:14 np0005590528 python3.9[118022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:15 np0005590528 python3.9[118100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:16 np0005590528 python3.9[118252]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:51:16 np0005590528 systemd[1]: Reloading.
Jan 21 08:51:16 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:51:16 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:51:16 np0005590528 systemd[1]: Starting Create netns directory...
Jan 21 08:51:16 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 21 08:51:16 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 21 08:51:16 np0005590528 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 08:51:16 np0005590528 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 08:51:16 np0005590528 systemd[1]: Finished Create netns directory.
Jan 21 08:51:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:17 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 21 08:51:17 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 21 08:51:17 np0005590528 python3.9[118443]: ansible-ansible.builtin.service_facts Invoked
Jan 21 08:51:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:17 np0005590528 network[118460]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 08:51:17 np0005590528 network[118461]: 'network-scripts' will be removed from distribution in near future.
Jan 21 08:51:17 np0005590528 network[118462]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 08:51:17 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 21 08:51:17 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 21 08:51:18 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 21 08:51:18 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 21 08:51:19 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 21 08:51:19 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 21 08:51:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:20 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 21 08:51:20 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 21 08:51:21 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 21 08:51:21 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 21 08:51:21 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 21 08:51:21 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 21 08:51:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:22 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 21 08:51:22 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 21 08:51:22 np0005590528 python3.9[118724]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:22 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 21 08:51:22 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 21 08:51:23 np0005590528 python3.9[118802]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:23 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 21 08:51:23 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 21 08:51:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:23 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 21 08:51:23 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 21 08:51:23 np0005590528 python3.9[118954]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:24 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 21 08:51:24 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 21 08:51:24 np0005590528 python3.9[119106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:25 np0005590528 python3.9[119184]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:25 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 21 08:51:25 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 21 08:51:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:26 np0005590528 python3.9[119336]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 08:51:26 np0005590528 systemd[1]: Starting Time & Date Service...
Jan 21 08:51:26 np0005590528 systemd[1]: Started Time & Date Service.
Jan 21 08:51:26 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 21 08:51:26 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 21 08:51:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:27 np0005590528 python3.9[119492]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:27 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 21 08:51:27 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 21 08:51:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:27 np0005590528 python3.9[119644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:28 np0005590528 python3.9[119722]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:29 np0005590528 python3.9[119874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:29 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 21 08:51:29 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 21 08:51:29 np0005590528 python3.9[119952]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.05m43szh recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:30 np0005590528 python3.9[120104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:30 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 21 08:51:30 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 21 08:51:30 np0005590528 python3.9[120182]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:31 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 21 08:51:31 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 21 08:51:31 np0005590528 python3.9[120334]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:51:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:32 np0005590528 python3[120487]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 08:51:33 np0005590528 python3.9[120639]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:33 np0005590528 python3.9[120717]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:34 np0005590528 python3.9[120869]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:34 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 21 08:51:34 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 21 08:51:35 np0005590528 python3.9[120995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003493.8582091-308-149032794136228/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:35 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 21 08:51:35 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 21 08:51:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:35 np0005590528 python3.9[121147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:36 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 21 08:51:36 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 21 08:51:36 np0005590528 python3.9[121225]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:36 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 21 08:51:36 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 21 08:51:37 np0005590528 python3.9[121377]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:37 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 21 08:51:37 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 21 08:51:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:37 np0005590528 python3.9[121455]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:38 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 21 08:51:38 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 21 08:51:38 np0005590528 python3.9[121607]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:38 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 21 08:51:38 np0005590528 python3.9[121685]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:38 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 21 08:51:39 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 21 08:51:39 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 21 08:51:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:51:39
Jan 21 08:51:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:51:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:51:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.log', '.rgw.root']
Jan 21 08:51:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:51:39 np0005590528 python3.9[121837]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:51:39 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 21 08:51:39 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 21 08:51:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:40 np0005590528 python3.9[121992]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:51:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:51:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:51:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:51:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:51:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:51:41 np0005590528 python3.9[122144]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:41 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 21 08:51:41 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 21 08:51:41 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 21 08:51:41 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 21 08:51:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:41 np0005590528 python3.9[122296]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:42 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 21 08:51:42 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 21 08:51:42 np0005590528 python3.9[122448]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 08:51:42 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 21 08:51:42 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 21 08:51:43 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 21 08:51:43 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 21 08:51:43 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 21 08:51:43 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 21 08:51:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:43 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 21 08:51:43 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 21 08:51:44 np0005590528 python3.9[122600]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 08:51:44 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 21 08:51:44 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 21 08:51:44 np0005590528 systemd[1]: session-40.scope: Deactivated successfully.
Jan 21 08:51:44 np0005590528 systemd[1]: session-40.scope: Consumed 34.377s CPU time.
Jan 21 08:51:44 np0005590528 systemd-logind[780]: Session 40 logged out. Waiting for processes to exit.
Jan 21 08:51:44 np0005590528 systemd-logind[780]: Removed session 40.
Jan 21 08:51:45 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 21 08:51:45 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 21 08:51:45 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 21 08:51:45 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 21 08:51:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:46 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 21 08:51:46 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 21 08:51:47 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 21 08:51:47 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 21 08:51:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:47 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 21 08:51:47 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 21 08:51:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:50 np0005590528 systemd-logind[780]: New session 41 of user zuul.
Jan 21 08:51:50 np0005590528 systemd[1]: Started Session 41 of User zuul.
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:51:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:51:50 np0005590528 python3.9[122780]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 21 08:51:51 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 21 08:51:51 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 21 08:51:51 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 21 08:51:51 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 21 08:51:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:51 np0005590528 python3.9[122932]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:51:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 21 08:51:51 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 21 08:51:52 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 21 08:51:52 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 21 08:51:52 np0005590528 python3.9[123086]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 21 08:51:53 np0005590528 python3.9[123238]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.q8fk82h7 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:51:53 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 21 08:51:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:53 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 21 08:51:54 np0005590528 python3.9[123364]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.q8fk82h7 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003512.8335037-44-222600172076668/.source.q8fk82h7 _original_basename=.ux_nvr7p follow=False checksum=0f4b47c126c5fd5568004f40e6dcd1e127845364 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:54 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 21 08:51:54 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 21 08:51:54 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 21 08:51:54 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 21 08:51:55 np0005590528 python3.9[123516]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:51:55 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 21 08:51:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:55 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 21 08:51:56 np0005590528 python3.9[123668]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeFBF9sLBUut0jERuw8eMRSTmHQPq77CYOZnLVmOaBCBCSPbeUxgTSDGAypqgANDFspz2HthTRfZ/0obiaSrheRKp8JI8vmjOkZpbGmM9pA3z2/L+A3dJtYryJ7HhNyc/RGv6tDqg7CqaPNO1VlKkJaCblvoGA/sTsuLgg72/kyPlgz+xxZIIXUolJRTelowGJeLl4FZhJevZEH/0RgRZW5SIe7QgvHYRWR/yATnINpKKPRydWLgea+k//th3RGx9GuUGWuDCPeJvxRKrqAMI8uxmSm/8+i6EK0vVqkOdcdQRVsHY2r6DJ55kbxKE6zwdr/2TWUC4j2L+d8AvLLtPL6yx6yOUDHD9KicyxruiQYYwkskMnkAWJeSL1egxNDFgJCw7P56bEGIyFhPIAzxR1E0ZuAQqv/W1KYFqspYxqjsccWFRon0TW3DyHzXSXRZkvgVBAyZPlZBTcsw58X536t/6unFkYBPfaCNmQIGhaOZ0dFgK7Bl1Jj1cThi6d/bE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINb+axAz9AQLLF8DlI2l4unh/lYce78aEpf6RASalCvh#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHJ6/CEvuTJeUBrk8Nw85tSdtMYRRRBEbjPN601M+Wvbkfd6a4tr5R6VV6/ot3jZ0PwT+0BaXWVuiTlpRpxsLDo=#012 create=True mode=0644 path=/tmp/ansible.q8fk82h7 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:56 np0005590528 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 08:51:56 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 21 08:51:56 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 21 08:51:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:51:57 np0005590528 python3.9[123822]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.q8fk82h7' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:51:57 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 21 08:51:57 np0005590528 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 21 08:51:57 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 21 08:51:57 np0005590528 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 21 08:51:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:51:57 np0005590528 python3.9[123976]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.q8fk82h7 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:51:58 np0005590528 systemd[1]: session-41.scope: Deactivated successfully.
Jan 21 08:51:58 np0005590528 systemd[1]: session-41.scope: Consumed 5.457s CPU time.
Jan 21 08:51:58 np0005590528 systemd-logind[780]: Session 41 logged out. Waiting for processes to exit.
Jan 21 08:51:58 np0005590528 systemd-logind[780]: Removed session 41.
Jan 21 08:51:58 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 21 08:51:58 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 21 08:51:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:00 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 21 08:52:00 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:52:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:52:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:52:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:52:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:52:01 np0005590528 podman[124145]: 2026-01-21 13:52:01.38228561 +0000 UTC m=+0.063424220 container create a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:52:01 np0005590528 podman[124145]: 2026-01-21 13:52:01.340401193 +0000 UTC m=+0.021539813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:52:01 np0005590528 systemd[1]: Started libpod-conmon-a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037.scope.
Jan 21 08:52:01 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:52:01 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 21 08:52:01 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 21 08:52:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:01 np0005590528 podman[124145]: 2026-01-21 13:52:01.854264688 +0000 UTC m=+0.535403358 container init a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:52:01 np0005590528 podman[124145]: 2026-01-21 13:52:01.862719023 +0000 UTC m=+0.543857623 container start a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 08:52:01 np0005590528 podman[124145]: 2026-01-21 13:52:01.868860123 +0000 UTC m=+0.549998803 container attach a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 08:52:01 np0005590528 naughty_grothendieck[124161]: 167 167
Jan 21 08:52:01 np0005590528 systemd[1]: libpod-a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037.scope: Deactivated successfully.
Jan 21 08:52:01 np0005590528 podman[124145]: 2026-01-21 13:52:01.873405223 +0000 UTC m=+0.554543823 container died a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 08:52:01 np0005590528 systemd[1]: var-lib-containers-storage-overlay-edbb85369616af04b1120987fc5d5eef020e955b067e230138d92bf6a53f7fc2-merged.mount: Deactivated successfully.
Jan 21 08:52:02 np0005590528 podman[124145]: 2026-01-21 13:52:02.059668055 +0000 UTC m=+0.740806665 container remove a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:52:02 np0005590528 systemd[1]: libpod-conmon-a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037.scope: Deactivated successfully.
Jan 21 08:52:02 np0005590528 podman[124184]: 2026-01-21 13:52:02.299267561 +0000 UTC m=+0.086531452 container create 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 08:52:02 np0005590528 podman[124184]: 2026-01-21 13:52:02.24564626 +0000 UTC m=+0.032910161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:52:02 np0005590528 systemd[1]: Started libpod-conmon-379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8.scope.
Jan 21 08:52:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:52:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:02 np0005590528 podman[124184]: 2026-01-21 13:52:02.436153804 +0000 UTC m=+0.223417715 container init 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 08:52:02 np0005590528 podman[124184]: 2026-01-21 13:52:02.447184572 +0000 UTC m=+0.234448443 container start 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:52:02 np0005590528 podman[124184]: 2026-01-21 13:52:02.451489607 +0000 UTC m=+0.238753558 container attach 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:52:03 np0005590528 suspicious_keller[124201]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:52:03 np0005590528 suspicious_keller[124201]: --> All data devices are unavailable
Jan 21 08:52:03 np0005590528 systemd[1]: libpod-379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8.scope: Deactivated successfully.
Jan 21 08:52:03 np0005590528 podman[124184]: 2026-01-21 13:52:03.02377431 +0000 UTC m=+0.811038201 container died 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 21 08:52:03 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f-merged.mount: Deactivated successfully.
Jan 21 08:52:03 np0005590528 podman[124184]: 2026-01-21 13:52:03.077439892 +0000 UTC m=+0.864703743 container remove 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 08:52:03 np0005590528 systemd[1]: libpod-conmon-379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8.scope: Deactivated successfully.
Jan 21 08:52:03 np0005590528 systemd-logind[780]: New session 42 of user zuul.
Jan 21 08:52:03 np0005590528 systemd[1]: Started Session 42 of User zuul.
Jan 21 08:52:03 np0005590528 podman[124308]: 2026-01-21 13:52:03.550121148 +0000 UTC m=+0.052583698 container create 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:52:03 np0005590528 systemd[1]: Started libpod-conmon-4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9.scope.
Jan 21 08:52:03 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:52:03 np0005590528 podman[124308]: 2026-01-21 13:52:03.626220375 +0000 UTC m=+0.128682945 container init 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:52:03 np0005590528 podman[124308]: 2026-01-21 13:52:03.533577516 +0000 UTC m=+0.036040086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:52:03 np0005590528 podman[124308]: 2026-01-21 13:52:03.63223483 +0000 UTC m=+0.134697380 container start 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:52:03 np0005590528 podman[124308]: 2026-01-21 13:52:03.635671974 +0000 UTC m=+0.138134554 container attach 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:52:03 np0005590528 festive_booth[124368]: 167 167
Jan 21 08:52:03 np0005590528 systemd[1]: libpod-4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9.scope: Deactivated successfully.
Jan 21 08:52:03 np0005590528 podman[124308]: 2026-01-21 13:52:03.637636712 +0000 UTC m=+0.140099262 container died 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:52:03 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2e37fcf2465918fd5a8cacbfa30ee02d7f334befa3114ecca5fbe27fa6e19a4d-merged.mount: Deactivated successfully.
Jan 21 08:52:03 np0005590528 podman[124308]: 2026-01-21 13:52:03.676565127 +0000 UTC m=+0.179027677 container remove 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 08:52:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:03 np0005590528 systemd[1]: libpod-conmon-4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9.scope: Deactivated successfully.
Jan 21 08:52:03 np0005590528 podman[124391]: 2026-01-21 13:52:03.845322813 +0000 UTC m=+0.048317303 container create 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 08:52:03 np0005590528 systemd[1]: Started libpod-conmon-78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19.scope.
Jan 21 08:52:03 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:52:03 np0005590528 podman[124391]: 2026-01-21 13:52:03.824440676 +0000 UTC m=+0.027435176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:52:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:03 np0005590528 podman[124391]: 2026-01-21 13:52:03.936537898 +0000 UTC m=+0.139532398 container init 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 08:52:03 np0005590528 podman[124391]: 2026-01-21 13:52:03.950105707 +0000 UTC m=+0.153100197 container start 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 08:52:03 np0005590528 podman[124391]: 2026-01-21 13:52:03.953969561 +0000 UTC m=+0.156964061 container attach 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]: {
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:    "0": [
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:        {
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "devices": [
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "/dev/loop3"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            ],
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_name": "ceph_lv0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_size": "21470642176",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "name": "ceph_lv0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "tags": {
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cluster_name": "ceph",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.crush_device_class": "",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.encrypted": "0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.objectstore": "bluestore",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osd_id": "0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.type": "block",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.vdo": "0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.with_tpm": "0"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            },
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "type": "block",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "vg_name": "ceph_vg0"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:        }
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:    ],
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:    "1": [
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:        {
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "devices": [
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "/dev/loop4"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            ],
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_name": "ceph_lv1",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_size": "21470642176",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "name": "ceph_lv1",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "tags": {
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cluster_name": "ceph",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.crush_device_class": "",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.encrypted": "0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.objectstore": "bluestore",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osd_id": "1",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.type": "block",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.vdo": "0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.with_tpm": "0"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            },
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "type": "block",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "vg_name": "ceph_vg1"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:        }
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:    ],
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:    "2": [
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:        {
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "devices": [
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "/dev/loop5"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            ],
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_name": "ceph_lv2",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_size": "21470642176",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "name": "ceph_lv2",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "tags": {
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.cluster_name": "ceph",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.crush_device_class": "",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.encrypted": "0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.objectstore": "bluestore",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osd_id": "2",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.type": "block",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.vdo": "0",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:                "ceph.with_tpm": "0"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            },
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "type": "block",
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:            "vg_name": "ceph_vg2"
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:        }
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]:    ]
Jan 21 08:52:04 np0005590528 naughty_hopper[124408]: }
Jan 21 08:52:04 np0005590528 systemd[1]: libpod-78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19.scope: Deactivated successfully.
Jan 21 08:52:04 np0005590528 podman[124513]: 2026-01-21 13:52:04.286838842 +0000 UTC m=+0.021878442 container died 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:52:04 np0005590528 systemd[1]: var-lib-containers-storage-overlay-a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2-merged.mount: Deactivated successfully.
Jan 21 08:52:04 np0005590528 podman[124513]: 2026-01-21 13:52:04.332322896 +0000 UTC m=+0.067362466 container remove 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:52:04 np0005590528 systemd[1]: libpod-conmon-78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19.scope: Deactivated successfully.
Jan 21 08:52:04 np0005590528 python3.9[124510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:52:04 np0005590528 podman[124617]: 2026-01-21 13:52:04.761083594 +0000 UTC m=+0.043324233 container create 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:52:04 np0005590528 systemd[1]: Started libpod-conmon-858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067.scope.
Jan 21 08:52:04 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:52:04 np0005590528 podman[124617]: 2026-01-21 13:52:04.832838397 +0000 UTC m=+0.115079056 container init 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:52:04 np0005590528 podman[124617]: 2026-01-21 13:52:04.740817023 +0000 UTC m=+0.023057712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:52:04 np0005590528 podman[124617]: 2026-01-21 13:52:04.839305453 +0000 UTC m=+0.121546092 container start 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:52:04 np0005590528 podman[124617]: 2026-01-21 13:52:04.842655465 +0000 UTC m=+0.124896114 container attach 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 21 08:52:04 np0005590528 affectionate_snyder[124656]: 167 167
Jan 21 08:52:04 np0005590528 systemd[1]: libpod-858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067.scope: Deactivated successfully.
Jan 21 08:52:04 np0005590528 podman[124617]: 2026-01-21 13:52:04.846081048 +0000 UTC m=+0.128321727 container died 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 08:52:04 np0005590528 systemd[1]: var-lib-containers-storage-overlay-fba9a8a4c3e62d7b920d738a964ca9de20bda0f14fc00b8c22e0aa1e49284d35-merged.mount: Deactivated successfully.
Jan 21 08:52:04 np0005590528 podman[124617]: 2026-01-21 13:52:04.881528438 +0000 UTC m=+0.163769077 container remove 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:52:04 np0005590528 systemd[1]: libpod-conmon-858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067.scope: Deactivated successfully.
Jan 21 08:52:05 np0005590528 podman[124709]: 2026-01-21 13:52:05.076887211 +0000 UTC m=+0.061577786 container create a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 08:52:05 np0005590528 systemd[1]: Started libpod-conmon-a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af.scope.
Jan 21 08:52:05 np0005590528 podman[124709]: 2026-01-21 13:52:05.046075323 +0000 UTC m=+0.030765958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:52:05 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:52:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:52:05 np0005590528 podman[124709]: 2026-01-21 13:52:05.168754352 +0000 UTC m=+0.153444887 container init a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 08:52:05 np0005590528 podman[124709]: 2026-01-21 13:52:05.175018763 +0000 UTC m=+0.159709338 container start a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 08:52:05 np0005590528 podman[124709]: 2026-01-21 13:52:05.179273407 +0000 UTC m=+0.163963942 container attach a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 08:52:05 np0005590528 python3.9[124816]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 08:52:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:05 np0005590528 lvm[124911]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:52:05 np0005590528 lvm[124908]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:52:05 np0005590528 lvm[124908]: VG ceph_vg0 finished
Jan 21 08:52:05 np0005590528 lvm[124911]: VG ceph_vg1 finished
Jan 21 08:52:05 np0005590528 lvm[124921]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:52:05 np0005590528 lvm[124921]: VG ceph_vg2 finished
Jan 21 08:52:05 np0005590528 bold_villani[124726]: {}
Jan 21 08:52:05 np0005590528 systemd[1]: libpod-a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af.scope: Deactivated successfully.
Jan 21 08:52:05 np0005590528 systemd[1]: libpod-a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af.scope: Consumed 1.312s CPU time.
Jan 21 08:52:05 np0005590528 podman[124709]: 2026-01-21 13:52:05.997818588 +0000 UTC m=+0.982509123 container died a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:52:06 np0005590528 systemd[1]: var-lib-containers-storage-overlay-52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4-merged.mount: Deactivated successfully.
Jan 21 08:52:06 np0005590528 podman[124709]: 2026-01-21 13:52:06.056539824 +0000 UTC m=+1.041230389 container remove a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:52:06 np0005590528 systemd[1]: libpod-conmon-a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af.scope: Deactivated successfully.
Jan 21 08:52:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:52:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:52:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:52:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:52:06 np0005590528 python3.9[125075]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:52:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:52:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:52:07 np0005590528 python3.9[125228]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:52:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:08 np0005590528 python3.9[125381]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:52:09 np0005590528 python3.9[125533]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:09 np0005590528 systemd[1]: session-42.scope: Deactivated successfully.
Jan 21 08:52:09 np0005590528 systemd[1]: session-42.scope: Consumed 3.909s CPU time.
Jan 21 08:52:09 np0005590528 systemd-logind[780]: Session 42 logged out. Waiting for processes to exit.
Jan 21 08:52:09 np0005590528 systemd-logind[780]: Removed session 42.
Jan 21 08:52:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:52:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:52:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:52:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:52:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:52:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:52:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:14 np0005590528 systemd-logind[780]: New session 43 of user zuul.
Jan 21 08:52:14 np0005590528 systemd[1]: Started Session 43 of User zuul.
Jan 21 08:52:14 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 21 08:52:14 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 21 08:52:15 np0005590528 python3.9[125711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:52:15 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 21 08:52:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:15 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 21 08:52:16 np0005590528 python3.9[125867]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:52:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:17 np0005590528 python3.9[125951]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 08:52:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:18 np0005590528 systemd[1]: session-18.scope: Deactivated successfully.
Jan 21 08:52:18 np0005590528 systemd[1]: session-18.scope: Consumed 1min 37.243s CPU time.
Jan 21 08:52:18 np0005590528 systemd-logind[780]: Session 18 logged out. Waiting for processes to exit.
Jan 21 08:52:18 np0005590528 systemd-logind[780]: Removed session 18.
Jan 21 08:52:18 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 21 08:52:18 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 21 08:52:19 np0005590528 python3.9[126102]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:52:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:20 np0005590528 python3.9[126253]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 08:52:21 np0005590528 python3.9[126403]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:52:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:22 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 21 08:52:22 np0005590528 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 21 08:52:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:23 np0005590528 python3.9[126553]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:52:23 np0005590528 systemd[1]: session-43.scope: Deactivated successfully.
Jan 21 08:52:23 np0005590528 systemd[1]: session-43.scope: Consumed 6.077s CPU time.
Jan 21 08:52:23 np0005590528 systemd-logind[780]: Session 43 logged out. Waiting for processes to exit.
Jan 21 08:52:23 np0005590528 systemd-logind[780]: Removed session 43.
Jan 21 08:52:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:28 np0005590528 systemd-logind[780]: New session 44 of user zuul.
Jan 21 08:52:28 np0005590528 systemd[1]: Started Session 44 of User zuul.
Jan 21 08:52:29 np0005590528 python3.9[126731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:52:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:31 np0005590528 python3.9[126887]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:31 np0005590528 python3.9[127039]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:32 np0005590528 python3.9[127191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:33 np0005590528 python3.9[127314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003552.181891-60-216928368491860/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f0cd73048907ee5aa263a36f175e7caed7c19b62 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:34 np0005590528 python3.9[127466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:34 np0005590528 python3.9[127589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003553.7422426-60-114502039322093/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=621aaf2369cfa3ba36d9b9152ccf63ed8c29f884 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:35 np0005590528 python3.9[127741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:36 np0005590528 python3.9[127864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003555.063375-60-76524087930483/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4a0995998f2033012989bc7ba668121e800b17e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:36 np0005590528 python3.9[128016]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:37 np0005590528 python3.9[128169]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:38 np0005590528 python3.9[128321]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:38 np0005590528 python3.9[128444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003557.7472303-119-167956625773194/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=87373f7552cc80bec6e873c7b99dfa988b06eeea backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:39 np0005590528 python3.9[128596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:52:39
Jan 21 08:52:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:52:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:52:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', 'backups', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 21 08:52:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:52:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:39 np0005590528 python3.9[128719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003558.963925-119-252127865520531/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4e90a8a3f55f20db41805edac4f667e0b6bdddc6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:40 np0005590528 python3.9[128871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:52:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:52:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:52:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:52:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:52:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:52:41 np0005590528 python3.9[128994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003560.1222758-119-226576297937512/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e8eb98a943f8b191d5c4163b48c22ed2f7d7cf7c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:41 np0005590528 python3.9[129146]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:42 np0005590528 python3.9[129298]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:43 np0005590528 python3.9[129450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:43 np0005590528 python3.9[129573]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003562.7096884-178-67789584894166/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=53aa8456c1d094ebc5c2b2d2ac43375382ab931d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:44 np0005590528 python3.9[129725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:47 np0005590528 python3.9[129848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003563.9778585-178-10773835385543/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4e90a8a3f55f20db41805edac4f667e0b6bdddc6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:47 np0005590528 python3.9[130000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:48 np0005590528 python3.9[130123]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003567.1920822-178-630904898664/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ab87c69949efff77275cd43a8f928010d3306784 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:49 np0005590528 python3.9[130275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:50 np0005590528 python3.9[130427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:52:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:52:50 np0005590528 python3.9[130550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003569.677586-246-29049587607302/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:51 np0005590528 python3.9[130702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:52 np0005590528 python3.9[130854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:52 np0005590528 python3.9[130977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003571.7837162-270-155204276527152/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:53 np0005590528 python3.9[131129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:54 np0005590528 python3.9[131281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:55 np0005590528 python3.9[131404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003573.8766284-294-220471936710064/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:55 np0005590528 python3.9[131556]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:56 np0005590528 python3.9[131708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:57 np0005590528 python3.9[131831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003575.989661-318-220574947470297/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:57 np0005590528 python3.9[131983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:52:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:52:58 np0005590528 python3.9[132135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:52:59 np0005590528 python3.9[132258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003577.9658418-342-11136171326849/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:52:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:52:59 np0005590528 python3.9[132410]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:53:00 np0005590528 python3.9[132562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:01 np0005590528 python3.9[132685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003580.1260636-366-227236086502854/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:01 np0005590528 systemd[1]: session-44.scope: Deactivated successfully.
Jan 21 08:53:01 np0005590528 systemd[1]: session-44.scope: Consumed 24.413s CPU time.
Jan 21 08:53:01 np0005590528 systemd-logind[780]: Session 44 logged out. Waiting for processes to exit.
Jan 21 08:53:01 np0005590528 systemd-logind[780]: Removed session 44.
Jan 21 08:53:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:53:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:53:07 np0005590528 systemd-logind[780]: New session 45 of user zuul.
Jan 21 08:53:07 np0005590528 systemd[1]: Started Session 45 of User zuul.
Jan 21 08:53:07 np0005590528 podman[132908]: 2026-01-21 13:53:07.471393132 +0000 UTC m=+0.049091973 container create 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:53:07 np0005590528 systemd[1]: Started libpod-conmon-69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78.scope.
Jan 21 08:53:07 np0005590528 podman[132908]: 2026-01-21 13:53:07.450916288 +0000 UTC m=+0.028615119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:53:07 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:53:07 np0005590528 podman[132908]: 2026-01-21 13:53:07.567979607 +0000 UTC m=+0.145678428 container init 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:53:07 np0005590528 podman[132908]: 2026-01-21 13:53:07.575148487 +0000 UTC m=+0.152847298 container start 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 08:53:07 np0005590528 podman[132908]: 2026-01-21 13:53:07.579018129 +0000 UTC m=+0.156716960 container attach 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 08:53:07 np0005590528 vigorous_spence[132924]: 167 167
Jan 21 08:53:07 np0005590528 systemd[1]: libpod-69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78.scope: Deactivated successfully.
Jan 21 08:53:07 np0005590528 podman[132908]: 2026-01-21 13:53:07.58287431 +0000 UTC m=+0.160573121 container died 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 08:53:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:53:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:53:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:53:07 np0005590528 systemd[1]: var-lib-containers-storage-overlay-5ee0f1ec70c0473925cd0c83b45234f4df4f33fa81682780aac759494a9b46da-merged.mount: Deactivated successfully.
Jan 21 08:53:07 np0005590528 podman[132908]: 2026-01-21 13:53:07.717401374 +0000 UTC m=+0.295100225 container remove 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 21 08:53:07 np0005590528 systemd[1]: libpod-conmon-69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78.scope: Deactivated successfully.
Jan 21 08:53:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:07 np0005590528 podman[133019]: 2026-01-21 13:53:07.885340868 +0000 UTC m=+0.038653706 container create c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:53:07 np0005590528 systemd[1]: Started libpod-conmon-c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf.scope.
Jan 21 08:53:07 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:53:07 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:07 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:07 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:07 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:07 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:07 np0005590528 podman[133019]: 2026-01-21 13:53:07.867737601 +0000 UTC m=+0.021050449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:53:07 np0005590528 podman[133019]: 2026-01-21 13:53:07.978939824 +0000 UTC m=+0.132252652 container init c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:53:07 np0005590528 podman[133019]: 2026-01-21 13:53:07.986322588 +0000 UTC m=+0.139635406 container start c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 08:53:07 np0005590528 podman[133019]: 2026-01-21 13:53:07.989679147 +0000 UTC m=+0.142991965 container attach c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 08:53:08 np0005590528 python3.9[133061]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:08 np0005590528 wizardly_dijkstra[133064]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:53:08 np0005590528 wizardly_dijkstra[133064]: --> All data devices are unavailable
Jan 21 08:53:08 np0005590528 systemd[1]: libpod-c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf.scope: Deactivated successfully.
Jan 21 08:53:08 np0005590528 podman[133019]: 2026-01-21 13:53:08.496872591 +0000 UTC m=+0.650185439 container died c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 08:53:08 np0005590528 systemd[1]: var-lib-containers-storage-overlay-99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f-merged.mount: Deactivated successfully.
Jan 21 08:53:08 np0005590528 podman[133019]: 2026-01-21 13:53:08.555329304 +0000 UTC m=+0.708642122 container remove c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 08:53:08 np0005590528 systemd[1]: libpod-conmon-c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf.scope: Deactivated successfully.
Jan 21 08:53:08 np0005590528 python3.9[133298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:09 np0005590528 podman[133324]: 2026-01-21 13:53:09.081851774 +0000 UTC m=+0.056033117 container create e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:53:09 np0005590528 systemd[1]: Started libpod-conmon-e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609.scope.
Jan 21 08:53:09 np0005590528 podman[133324]: 2026-01-21 13:53:09.04868395 +0000 UTC m=+0.022865333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:53:09 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:53:09 np0005590528 podman[133324]: 2026-01-21 13:53:09.165652637 +0000 UTC m=+0.139833960 container init e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 08:53:09 np0005590528 podman[133324]: 2026-01-21 13:53:09.176226378 +0000 UTC m=+0.150407681 container start e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 08:53:09 np0005590528 podman[133324]: 2026-01-21 13:53:09.179773952 +0000 UTC m=+0.153955245 container attach e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 08:53:09 np0005590528 affectionate_bardeen[133374]: 167 167
Jan 21 08:53:09 np0005590528 systemd[1]: libpod-e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609.scope: Deactivated successfully.
Jan 21 08:53:09 np0005590528 podman[133324]: 2026-01-21 13:53:09.184434723 +0000 UTC m=+0.158616056 container died e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 08:53:09 np0005590528 systemd[1]: var-lib-containers-storage-overlay-bf464fa6c25bbff2ff37d26bdcab941b1c234f363ed3a817e1a5bb8785e86ca1-merged.mount: Deactivated successfully.
Jan 21 08:53:09 np0005590528 podman[133324]: 2026-01-21 13:53:09.225725589 +0000 UTC m=+0.199906892 container remove e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:53:09 np0005590528 systemd[1]: libpod-conmon-e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609.scope: Deactivated successfully.
Jan 21 08:53:09 np0005590528 podman[133405]: 2026-01-21 13:53:09.40447876 +0000 UTC m=+0.045627281 container create 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 08:53:09 np0005590528 systemd[1]: Started libpod-conmon-8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29.scope.
Jan 21 08:53:09 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:53:09 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:09 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:09 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:09 np0005590528 podman[133405]: 2026-01-21 13:53:09.38675129 +0000 UTC m=+0.027899821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:53:09 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:09 np0005590528 podman[133405]: 2026-01-21 13:53:09.492329279 +0000 UTC m=+0.133477810 container init 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 08:53:09 np0005590528 podman[133405]: 2026-01-21 13:53:09.505656784 +0000 UTC m=+0.146805295 container start 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:53:09 np0005590528 podman[133405]: 2026-01-21 13:53:09.508536872 +0000 UTC m=+0.149685383 container attach 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 21 08:53:09 np0005590528 python3.9[133495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003588.3045938-29-171429541803179/.source.conf _original_basename=ceph.conf follow=False checksum=d208d2e00ec30de06826064adf7fede1b3379f31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:09 np0005590528 hungry_payne[133451]: {
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:    "0": [
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:        {
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "devices": [
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "/dev/loop3"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            ],
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_name": "ceph_lv0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_size": "21470642176",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "name": "ceph_lv0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "tags": {
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cluster_name": "ceph",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.crush_device_class": "",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.encrypted": "0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.objectstore": "bluestore",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osd_id": "0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.type": "block",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.vdo": "0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.with_tpm": "0"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            },
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "type": "block",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "vg_name": "ceph_vg0"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:        }
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:    ],
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:    "1": [
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:        {
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "devices": [
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "/dev/loop4"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            ],
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_name": "ceph_lv1",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_size": "21470642176",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "name": "ceph_lv1",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "tags": {
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cluster_name": "ceph",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.crush_device_class": "",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.encrypted": "0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.objectstore": "bluestore",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osd_id": "1",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.type": "block",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.vdo": "0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.with_tpm": "0"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            },
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "type": "block",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "vg_name": "ceph_vg1"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:        }
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:    ],
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:    "2": [
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:        {
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "devices": [
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "/dev/loop5"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            ],
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_name": "ceph_lv2",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_size": "21470642176",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "name": "ceph_lv2",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "tags": {
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.cluster_name": "ceph",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.crush_device_class": "",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.encrypted": "0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.objectstore": "bluestore",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osd_id": "2",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.type": "block",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.vdo": "0",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:                "ceph.with_tpm": "0"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            },
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "type": "block",
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:            "vg_name": "ceph_vg2"
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:        }
Jan 21 08:53:09 np0005590528 hungry_payne[133451]:    ]
Jan 21 08:53:09 np0005590528 hungry_payne[133451]: }
Jan 21 08:53:09 np0005590528 systemd[1]: libpod-8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29.scope: Deactivated successfully.
Jan 21 08:53:09 np0005590528 podman[133405]: 2026-01-21 13:53:09.834900636 +0000 UTC m=+0.476049157 container died 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:53:09 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc-merged.mount: Deactivated successfully.
Jan 21 08:53:09 np0005590528 podman[133405]: 2026-01-21 13:53:09.888647248 +0000 UTC m=+0.529795769 container remove 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:53:09 np0005590528 systemd[1]: libpod-conmon-8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29.scope: Deactivated successfully.
Jan 21 08:53:10 np0005590528 podman[133725]: 2026-01-21 13:53:10.347330153 +0000 UTC m=+0.039731531 container create 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 08:53:10 np0005590528 python3.9[133712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:10 np0005590528 systemd[1]: Started libpod-conmon-905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46.scope.
Jan 21 08:53:10 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:53:10 np0005590528 podman[133725]: 2026-01-21 13:53:10.420542496 +0000 UTC m=+0.112943884 container init 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:53:10 np0005590528 podman[133725]: 2026-01-21 13:53:10.328712692 +0000 UTC m=+0.021114090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:53:10 np0005590528 podman[133725]: 2026-01-21 13:53:10.426600339 +0000 UTC m=+0.119001717 container start 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 08:53:10 np0005590528 podman[133725]: 2026-01-21 13:53:10.430256575 +0000 UTC m=+0.122657983 container attach 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:53:10 np0005590528 jovial_cori[133741]: 167 167
Jan 21 08:53:10 np0005590528 systemd[1]: libpod-905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46.scope: Deactivated successfully.
Jan 21 08:53:10 np0005590528 podman[133725]: 2026-01-21 13:53:10.431477134 +0000 UTC m=+0.123878512 container died 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 08:53:10 np0005590528 systemd[1]: var-lib-containers-storage-overlay-96074004c301c3eeaecfa1a9a92483b8293da97581cfeeaf493b2bdbc57333bd-merged.mount: Deactivated successfully.
Jan 21 08:53:10 np0005590528 podman[133725]: 2026-01-21 13:53:10.466523303 +0000 UTC m=+0.158924681 container remove 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 08:53:10 np0005590528 systemd[1]: libpod-conmon-905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46.scope: Deactivated successfully.
Jan 21 08:53:10 np0005590528 podman[133824]: 2026-01-21 13:53:10.630477553 +0000 UTC m=+0.041726167 container create 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:53:10 np0005590528 systemd[1]: Started libpod-conmon-59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf.scope.
Jan 21 08:53:10 np0005590528 podman[133824]: 2026-01-21 13:53:10.61298009 +0000 UTC m=+0.024228724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:53:10 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:53:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:53:10 np0005590528 podman[133824]: 2026-01-21 13:53:10.750115025 +0000 UTC m=+0.161363659 container init 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:53:10 np0005590528 podman[133824]: 2026-01-21 13:53:10.759309392 +0000 UTC m=+0.170558006 container start 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 08:53:10 np0005590528 podman[133824]: 2026-01-21 13:53:10.764984717 +0000 UTC m=+0.176233331 container attach 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:53:10 np0005590528 python3.9[133908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003589.9104726-29-216954102189545/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=01672c665cebe1978e709c2eff9d48fb31c7992e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:53:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:53:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:53:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:53:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:53:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:53:11 np0005590528 systemd[1]: session-45.scope: Deactivated successfully.
Jan 21 08:53:11 np0005590528 systemd[1]: session-45.scope: Consumed 2.697s CPU time.
Jan 21 08:53:11 np0005590528 systemd-logind[780]: Session 45 logged out. Waiting for processes to exit.
Jan 21 08:53:11 np0005590528 systemd-logind[780]: Removed session 45.
Jan 21 08:53:11 np0005590528 lvm[134008]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:53:11 np0005590528 lvm[134008]: VG ceph_vg1 finished
Jan 21 08:53:11 np0005590528 lvm[134007]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:53:11 np0005590528 lvm[134007]: VG ceph_vg0 finished
Jan 21 08:53:11 np0005590528 lvm[134010]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:53:11 np0005590528 lvm[134010]: VG ceph_vg2 finished
Jan 21 08:53:11 np0005590528 hungry_rhodes[133876]: {}
Jan 21 08:53:11 np0005590528 systemd[1]: libpod-59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf.scope: Deactivated successfully.
Jan 21 08:53:11 np0005590528 systemd[1]: libpod-59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf.scope: Consumed 1.382s CPU time.
Jan 21 08:53:11 np0005590528 podman[133824]: 2026-01-21 13:53:11.599449276 +0000 UTC m=+1.010697890 container died 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:53:11 np0005590528 systemd[1]: var-lib-containers-storage-overlay-62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1-merged.mount: Deactivated successfully.
Jan 21 08:53:11 np0005590528 podman[133824]: 2026-01-21 13:53:11.652840819 +0000 UTC m=+1.064089433 container remove 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 21 08:53:11 np0005590528 systemd[1]: libpod-conmon-59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf.scope: Deactivated successfully.
Jan 21 08:53:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:53:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:53:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:53:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:53:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:53:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:53:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:17 np0005590528 systemd-logind[780]: New session 46 of user zuul.
Jan 21 08:53:17 np0005590528 systemd[1]: Started Session 46 of User zuul.
Jan 21 08:53:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:18 np0005590528 python3.9[134203]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:53:19 np0005590528 python3.9[134359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:53:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:20 np0005590528 python3.9[134511]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:53:20 np0005590528 python3.9[134661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:53:21 np0005590528 python3.9[134813]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 21 08:53:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:23 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 21 08:53:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:23 np0005590528 python3.9[134969]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:53:24 np0005590528 python3.9[135053]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:53:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:26 np0005590528 python3.9[135206]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:53:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:27 np0005590528 python3[135361]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.288616) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608288710, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1788, "num_deletes": 250, "total_data_size": 2483619, "memory_usage": 2536824, "flush_reason": "Manual Compaction"}
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608306512, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1476589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7288, "largest_seqno": 9075, "table_properties": {"data_size": 1470694, "index_size": 2650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18076, "raw_average_key_size": 21, "raw_value_size": 1456420, "raw_average_value_size": 1711, "num_data_blocks": 124, "num_entries": 851, "num_filter_entries": 851, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003453, "oldest_key_time": 1769003453, "file_creation_time": 1769003608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 17950 microseconds, and 8483 cpu microseconds.
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.306579) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1476589 bytes OK
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.306599) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.308094) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.308115) EVENT_LOG_v1 {"time_micros": 1769003608308109, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.308139) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2475581, prev total WAL file size 2475581, number of live WAL files 2.
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.309155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1441KB)], [20(7595KB)]
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608309239, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9254332, "oldest_snapshot_seqno": -1}
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3416 keys, 7252103 bytes, temperature: kUnknown
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608371471, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7252103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7226063, "index_size": 16394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81552, "raw_average_key_size": 23, "raw_value_size": 7161202, "raw_average_value_size": 2096, "num_data_blocks": 726, "num_entries": 3416, "num_filter_entries": 3416, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769003608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.371773) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7252103 bytes
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.375172) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.4 rd, 116.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.4 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 3856, records dropped: 440 output_compression: NoCompression
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.375196) EVENT_LOG_v1 {"time_micros": 1769003608375184, "job": 6, "event": "compaction_finished", "compaction_time_micros": 62364, "compaction_time_cpu_micros": 19747, "output_level": 6, "num_output_files": 1, "total_output_size": 7252103, "num_input_records": 3856, "num_output_records": 3416, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608375530, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608377021, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.309020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:53:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:53:28 np0005590528 python3.9[135513]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:29 np0005590528 python3.9[135665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:30 np0005590528 python3.9[135743]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:30 np0005590528 python3.9[135895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:31 np0005590528 python3.9[135973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yw5b2sxy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:31 np0005590528 python3.9[136125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:32 np0005590528 python3.9[136203]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:33 np0005590528 python3.9[136355]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:53:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:34 np0005590528 python3[136508]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 08:53:35 np0005590528 python3.9[136660]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:35 np0005590528 python3.9[136785]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003614.4467416-152-96310264957561/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:36 np0005590528 python3.9[136937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:37 np0005590528 python3.9[137062]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003616.054529-167-194603325785554/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:37 np0005590528 python3.9[137214]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:38 np0005590528 python3.9[137339]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003617.4118094-182-114681095382234/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:39 np0005590528 python3.9[137491]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:53:39
Jan 21 08:53:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:53:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:53:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta']
Jan 21 08:53:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:53:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:39 np0005590528 python3.9[137616]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003618.7606864-197-103150635357382/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:40 np0005590528 python3.9[137768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:53:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:53:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:53:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:53:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:53:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:53:41 np0005590528 python3.9[137893]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003620.0412009-212-196556252261035/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:41 np0005590528 python3.9[138045]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:42 np0005590528 python3.9[138197]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:53:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:43 np0005590528 python3.9[138352]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:44 np0005590528 python3.9[138504]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:53:44 np0005590528 python3.9[138657]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:53:45 np0005590528 python3.9[138811]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:53:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:46 np0005590528 python3.9[138966]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:47 np0005590528 python3.9[139116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:53:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:48 np0005590528 python3.9[139269]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:53:48 np0005590528 ovs-vsctl[139270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 21 08:53:49 np0005590528 python3.9[139422]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:53:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:49 np0005590528 python3.9[139577]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:53:49 np0005590528 ovs-vsctl[139578]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:53:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:53:50 np0005590528 python3.9[139728]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:53:51 np0005590528 python3.9[139882]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:53:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:52 np0005590528 python3.9[140034]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:52 np0005590528 python3.9[140112]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:53:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:53 np0005590528 python3.9[140264]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:54 np0005590528 python3.9[140342]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:53:54 np0005590528 python3.9[140494]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:55 np0005590528 python3.9[140646]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:55 np0005590528 python3.9[140724]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:56 np0005590528 python3.9[140876]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:57 np0005590528 python3.9[140954]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:53:57 np0005590528 python3.9[141106]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:53:57 np0005590528 systemd[1]: Reloading.
Jan 21 08:53:57 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:53:57 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:53:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:53:58 np0005590528 python3.9[141296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:53:59 np0005590528 python3.9[141374]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:53:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:00 np0005590528 python3.9[141526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:00 np0005590528 python3.9[141604]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:01 np0005590528 python3.9[141756]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:54:01 np0005590528 systemd[1]: Reloading.
Jan 21 08:54:01 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:54:01 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:54:01 np0005590528 systemd[1]: Starting Create netns directory...
Jan 21 08:54:01 np0005590528 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 08:54:01 np0005590528 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 08:54:01 np0005590528 systemd[1]: Finished Create netns directory.
Jan 21 08:54:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:02 np0005590528 python3.9[141950]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:03 np0005590528 python3.9[142102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:03 np0005590528 python3.9[142225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003642.7039864-463-29500110450385/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:04 np0005590528 python3.9[142377]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:05 np0005590528 python3.9[142529]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:06 np0005590528 python3.9[142681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:06 np0005590528 python3.9[142804]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003645.7288966-496-220393008982330/.source.json _original_basename=.sc0j3sg2 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:07 np0005590528 python3.9[142954]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:09 np0005590528 python3.9[143377]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 21 08:54:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:10 np0005590528 python3.9[143529]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 08:54:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:54:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:54:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:54:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:54:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:54:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:54:11 np0005590528 python3[143681]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 08:54:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:54:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:54:13 np0005590528 podman[143870]: 2026-01-21 13:54:13.095664756 +0000 UTC m=+0.052534621 container create a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:54:13 np0005590528 systemd[1]: Started libpod-conmon-a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d.scope.
Jan 21 08:54:13 np0005590528 podman[143870]: 2026-01-21 13:54:13.066728207 +0000 UTC m=+0.023598092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:54:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:54:13 np0005590528 podman[143870]: 2026-01-21 13:54:13.196403613 +0000 UTC m=+0.153273498 container init a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:54:13 np0005590528 podman[143870]: 2026-01-21 13:54:13.20425881 +0000 UTC m=+0.161128675 container start a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:54:13 np0005590528 thirsty_pare[143886]: 167 167
Jan 21 08:54:13 np0005590528 systemd[1]: libpod-a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d.scope: Deactivated successfully.
Jan 21 08:54:13 np0005590528 podman[143870]: 2026-01-21 13:54:13.210453437 +0000 UTC m=+0.167323322 container attach a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 08:54:13 np0005590528 podman[143870]: 2026-01-21 13:54:13.211716998 +0000 UTC m=+0.168586863 container died a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 08:54:13 np0005590528 systemd[1]: var-lib-containers-storage-overlay-5ec9c70bbbaa63249967bce316513c759d426101fd4cf0d7d531bee23d807210-merged.mount: Deactivated successfully.
Jan 21 08:54:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:13 np0005590528 podman[143870]: 2026-01-21 13:54:13.312724831 +0000 UTC m=+0.269594696 container remove a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 21 08:54:13 np0005590528 systemd[1]: libpod-conmon-a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d.scope: Deactivated successfully.
Jan 21 08:54:13 np0005590528 podman[143911]: 2026-01-21 13:54:13.449162107 +0000 UTC m=+0.029745259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:54:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:54:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:54:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:54:13 np0005590528 podman[143911]: 2026-01-21 13:54:13.551289776 +0000 UTC m=+0.131872908 container create 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:54:13 np0005590528 systemd[1]: Started libpod-conmon-32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e.scope.
Jan 21 08:54:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:54:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:13 np0005590528 podman[143911]: 2026-01-21 13:54:13.646472871 +0000 UTC m=+0.227056013 container init 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:54:13 np0005590528 podman[143911]: 2026-01-21 13:54:13.656997532 +0000 UTC m=+0.237580664 container start 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:54:13 np0005590528 podman[143911]: 2026-01-21 13:54:13.660714091 +0000 UTC m=+0.241297233 container attach 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 08:54:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:14 np0005590528 stupefied_pare[143927]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:54:14 np0005590528 stupefied_pare[143927]: --> All data devices are unavailable
Jan 21 08:54:14 np0005590528 systemd[1]: libpod-32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e.scope: Deactivated successfully.
Jan 21 08:54:14 np0005590528 conmon[143927]: conmon 32132a26dd1c888b50bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e.scope/container/memory.events
Jan 21 08:54:14 np0005590528 podman[143911]: 2026-01-21 13:54:14.17304424 +0000 UTC m=+0.753627382 container died 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Jan 21 08:54:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:17 np0005590528 systemd[1]: var-lib-containers-storage-overlay-388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98-merged.mount: Deactivated successfully.
Jan 21 08:54:17 np0005590528 podman[143911]: 2026-01-21 13:54:17.05119128 +0000 UTC m=+3.631774412 container remove 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 08:54:17 np0005590528 systemd[1]: libpod-conmon-32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e.scope: Deactivated successfully.
Jan 21 08:54:17 np0005590528 podman[143693]: 2026-01-21 13:54:17.077692831 +0000 UTC m=+5.439354840 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 08:54:17 np0005590528 podman[144071]: 2026-01-21 13:54:17.215767366 +0000 UTC m=+0.044933910 container create 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 08:54:17 np0005590528 podman[144071]: 2026-01-21 13:54:17.19197454 +0000 UTC m=+0.021141104 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 08:54:17 np0005590528 python3[143681]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 08:54:17 np0005590528 podman[144169]: 2026-01-21 13:54:17.471737316 +0000 UTC m=+0.041138610 container create 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 21 08:54:17 np0005590528 systemd[1]: Started libpod-conmon-047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e.scope.
Jan 21 08:54:17 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:54:17 np0005590528 podman[144169]: 2026-01-21 13:54:17.545123513 +0000 UTC m=+0.114524817 container init 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:54:17 np0005590528 podman[144169]: 2026-01-21 13:54:17.451587377 +0000 UTC m=+0.020988701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:54:17 np0005590528 podman[144169]: 2026-01-21 13:54:17.553587024 +0000 UTC m=+0.122988318 container start 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 21 08:54:17 np0005590528 podman[144169]: 2026-01-21 13:54:17.556672687 +0000 UTC m=+0.126073981 container attach 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:54:17 np0005590528 wonderful_wilson[144214]: 167 167
Jan 21 08:54:17 np0005590528 podman[144169]: 2026-01-21 13:54:17.559864253 +0000 UTC m=+0.129265537 container died 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 08:54:17 np0005590528 systemd[1]: libpod-047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e.scope: Deactivated successfully.
Jan 21 08:54:17 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e5322174228702c682b6693ce7c653722ab0c22711d043f19e9a61363941242f-merged.mount: Deactivated successfully.
Jan 21 08:54:17 np0005590528 podman[144169]: 2026-01-21 13:54:17.603681796 +0000 UTC m=+0.173083090 container remove 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:54:17 np0005590528 systemd[1]: libpod-conmon-047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e.scope: Deactivated successfully.
Jan 21 08:54:17 np0005590528 podman[144318]: 2026-01-21 13:54:17.759489143 +0000 UTC m=+0.041515049 container create 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:54:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:17 np0005590528 systemd[1]: Started libpod-conmon-64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c.scope.
Jan 21 08:54:17 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:54:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:17 np0005590528 podman[144318]: 2026-01-21 13:54:17.740215604 +0000 UTC m=+0.022241530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:54:17 np0005590528 python3.9[144348]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:54:18 np0005590528 podman[144318]: 2026-01-21 13:54:18.030481881 +0000 UTC m=+0.312507797 container init 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:54:18 np0005590528 podman[144318]: 2026-01-21 13:54:18.037657701 +0000 UTC m=+0.319683607 container start 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:54:18 np0005590528 podman[144318]: 2026-01-21 13:54:18.229940386 +0000 UTC m=+0.511966292 container attach 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:54:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]: {
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:    "0": [
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:        {
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "devices": [
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "/dev/loop3"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            ],
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_name": "ceph_lv0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_size": "21470642176",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "name": "ceph_lv0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "tags": {
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cluster_name": "ceph",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.crush_device_class": "",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.encrypted": "0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.objectstore": "bluestore",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osd_id": "0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.type": "block",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.vdo": "0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.with_tpm": "0"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            },
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "type": "block",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "vg_name": "ceph_vg0"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:        }
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:    ],
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:    "1": [
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:        {
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "devices": [
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "/dev/loop4"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            ],
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_name": "ceph_lv1",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_size": "21470642176",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "name": "ceph_lv1",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "tags": {
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cluster_name": "ceph",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.crush_device_class": "",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.encrypted": "0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.objectstore": "bluestore",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osd_id": "1",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.type": "block",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.vdo": "0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.with_tpm": "0"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            },
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "type": "block",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "vg_name": "ceph_vg1"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:        }
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:    ],
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:    "2": [
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:        {
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "devices": [
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "/dev/loop5"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            ],
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_name": "ceph_lv2",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_size": "21470642176",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "name": "ceph_lv2",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "tags": {
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.cluster_name": "ceph",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.crush_device_class": "",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.encrypted": "0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.objectstore": "bluestore",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osd_id": "2",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.type": "block",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.vdo": "0",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:                "ceph.with_tpm": "0"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            },
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "type": "block",
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:            "vg_name": "ceph_vg2"
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:        }
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]:    ]
Jan 21 08:54:18 np0005590528 affectionate_merkle[144352]: }
Jan 21 08:54:18 np0005590528 systemd[1]: libpod-64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c.scope: Deactivated successfully.
Jan 21 08:54:18 np0005590528 podman[144318]: 2026-01-21 13:54:18.375249584 +0000 UTC m=+0.657275510 container died 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 08:54:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb-merged.mount: Deactivated successfully.
Jan 21 08:54:18 np0005590528 podman[144318]: 2026-01-21 13:54:18.463964344 +0000 UTC m=+0.745990260 container remove 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 08:54:18 np0005590528 systemd[1]: libpod-conmon-64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c.scope: Deactivated successfully.
Jan 21 08:54:18 np0005590528 python3.9[144528]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:18 np0005590528 podman[144632]: 2026-01-21 13:54:18.888543706 +0000 UTC m=+0.041781884 container create b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:54:18 np0005590528 systemd[1]: Started libpod-conmon-b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059.scope.
Jan 21 08:54:18 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:54:18 np0005590528 podman[144632]: 2026-01-21 13:54:18.965911127 +0000 UTC m=+0.119149325 container init b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:54:18 np0005590528 podman[144632]: 2026-01-21 13:54:18.871957492 +0000 UTC m=+0.025195700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:54:18 np0005590528 podman[144632]: 2026-01-21 13:54:18.97360089 +0000 UTC m=+0.126839068 container start b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:54:18 np0005590528 podman[144632]: 2026-01-21 13:54:18.977342499 +0000 UTC m=+0.130580697 container attach b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 08:54:18 np0005590528 bold_zhukovsky[144680]: 167 167
Jan 21 08:54:18 np0005590528 systemd[1]: libpod-b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059.scope: Deactivated successfully.
Jan 21 08:54:18 np0005590528 podman[144632]: 2026-01-21 13:54:18.979361147 +0000 UTC m=+0.132599345 container died b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:54:19 np0005590528 systemd[1]: var-lib-containers-storage-overlay-886cbbc55f812f8bbc6465ac736ede54daa27b17ee7df377c7d33f3bad261145-merged.mount: Deactivated successfully.
Jan 21 08:54:19 np0005590528 podman[144632]: 2026-01-21 13:54:19.028400614 +0000 UTC m=+0.181638792 container remove b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:54:19 np0005590528 systemd[1]: libpod-conmon-b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059.scope: Deactivated successfully.
Jan 21 08:54:19 np0005590528 python3.9[144682]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:54:19 np0005590528 podman[144704]: 2026-01-21 13:54:19.167966845 +0000 UTC m=+0.042837251 container create 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 08:54:19 np0005590528 systemd[1]: Started libpod-conmon-30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac.scope.
Jan 21 08:54:19 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:54:19 np0005590528 podman[144704]: 2026-01-21 13:54:19.144993718 +0000 UTC m=+0.019864144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:54:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:19 np0005590528 podman[144704]: 2026-01-21 13:54:19.273441924 +0000 UTC m=+0.148312370 container init 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:54:19 np0005590528 podman[144704]: 2026-01-21 13:54:19.280640215 +0000 UTC m=+0.155510621 container start 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 08:54:19 np0005590528 podman[144704]: 2026-01-21 13:54:19.286397783 +0000 UTC m=+0.161268189 container attach 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:54:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:19 np0005590528 python3.9[144895]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003659.2097723-574-115651427355373/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:19 np0005590528 lvm[144972]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:54:19 np0005590528 lvm[144971]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:54:19 np0005590528 lvm[144971]: VG ceph_vg0 finished
Jan 21 08:54:19 np0005590528 lvm[144972]: VG ceph_vg1 finished
Jan 21 08:54:20 np0005590528 lvm[144976]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:54:20 np0005590528 lvm[144976]: VG ceph_vg2 finished
Jan 21 08:54:20 np0005590528 angry_saha[144743]: {}
Jan 21 08:54:20 np0005590528 systemd[1]: libpod-30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac.scope: Deactivated successfully.
Jan 21 08:54:20 np0005590528 systemd[1]: libpod-30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac.scope: Consumed 1.378s CPU time.
Jan 21 08:54:20 np0005590528 podman[144704]: 2026-01-21 13:54:20.127646268 +0000 UTC m=+1.002516674 container died 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:54:20 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1-merged.mount: Deactivated successfully.
Jan 21 08:54:20 np0005590528 podman[144704]: 2026-01-21 13:54:20.193318051 +0000 UTC m=+1.068188457 container remove 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:54:20 np0005590528 systemd[1]: libpod-conmon-30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac.scope: Deactivated successfully.
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.285040) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660285072, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 635, "num_deletes": 251, "total_data_size": 728583, "memory_usage": 740328, "flush_reason": "Manual Compaction"}
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660292069, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 721962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9076, "largest_seqno": 9710, "table_properties": {"data_size": 718652, "index_size": 1218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7388, "raw_average_key_size": 18, "raw_value_size": 711950, "raw_average_value_size": 1771, "num_data_blocks": 58, "num_entries": 402, "num_filter_entries": 402, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003609, "oldest_key_time": 1769003609, "file_creation_time": 1769003660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 7073 microseconds, and 3188 cpu microseconds.
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.292112) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 721962 bytes OK
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.292129) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.295698) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.295722) EVENT_LOG_v1 {"time_micros": 1769003660295717, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.295739) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 725183, prev total WAL file size 765631, number of live WAL files 2.
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.296281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(705KB)], [23(7082KB)]
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660296383, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7974065, "oldest_snapshot_seqno": -1}
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:54:20 np0005590528 python3.9[145043]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 08:54:20 np0005590528 systemd[1]: Reloading.
Jan 21 08:54:20 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:54:20 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3305 keys, 6170317 bytes, temperature: kUnknown
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660590818, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6170317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6146527, "index_size": 14401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80064, "raw_average_key_size": 24, "raw_value_size": 6085100, "raw_average_value_size": 1841, "num_data_blocks": 627, "num_entries": 3305, "num_filter_entries": 3305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769003660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.592293) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6170317 bytes
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.595012) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.0 rd, 20.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 6.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(19.6) write-amplify(8.5) OK, records in: 3818, records dropped: 513 output_compression: NoCompression
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.595038) EVENT_LOG_v1 {"time_micros": 1769003660595024, "job": 8, "event": "compaction_finished", "compaction_time_micros": 295633, "compaction_time_cpu_micros": 18448, "output_level": 6, "num_output_files": 1, "total_output_size": 6170317, "num_input_records": 3818, "num_output_records": 3305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660595326, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660596475, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.296115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:54:20 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:54:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:54:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2121 writes, 9581 keys, 2121 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2121 writes, 2121 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2121 writes, 9581 keys, 2121 commit groups, 1.0 writes per commit group, ingest: 12.52 MB, 0.02 MB/s#012Interval WAL: 2121 writes, 2121 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.5      0.11              0.04         4    0.027       0      0       0.0       0.0#012  L6      1/0    5.88 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     40.2     34.0      0.60              0.07         3    0.198     10K   1243       0.0       0.0#012 Sum      1/0    5.88 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     34.0     42.4      0.70              0.11         7    0.100     10K   1243       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     34.3     42.6      0.70              0.11         6    0.116     10K   1243       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     40.2     34.0      0.60              0.07         3    0.198     10K   1243       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     93.3      0.10              0.04         3    0.034       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.7 seconds#012Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 308.00 MB usage: 960.53 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(54,841.66 KB,0.26686%) FilterBlock(8,38.05 KB,0.0120634%) IndexBlock(8,80.83 KB,0.0256278%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 08:54:21 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:54:21 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:54:21 np0005590528 python3.9[145181]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:54:21 np0005590528 systemd[1]: Reloading.
Jan 21 08:54:21 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:54:21 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:54:21 np0005590528 systemd[1]: Starting ovn_controller container...
Jan 21 08:54:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:21 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:54:21 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c60d16e88face7e778cd90f2ac050fe7c8c6342cddbdd87a33bfc68cf6d9ff/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 21 08:54:21 np0005590528 systemd[1]: Started /usr/bin/podman healthcheck run 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488.
Jan 21 08:54:21 np0005590528 podman[145221]: 2026-01-21 13:54:21.855323624 +0000 UTC m=+0.129714526 container init 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 21 08:54:21 np0005590528 ovn_controller[145237]: + sudo -E kolla_set_configs
Jan 21 08:54:21 np0005590528 podman[145221]: 2026-01-21 13:54:21.888625247 +0000 UTC m=+0.163016139 container start 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 08:54:21 np0005590528 edpm-start-podman-container[145221]: ovn_controller
Jan 21 08:54:21 np0005590528 systemd[1]: Created slice User Slice of UID 0.
Jan 21 08:54:21 np0005590528 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 21 08:54:21 np0005590528 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 21 08:54:21 np0005590528 podman[145244]: 2026-01-21 13:54:21.96650198 +0000 UTC m=+0.068692405 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 21 08:54:21 np0005590528 systemd[1]: Starting User Manager for UID 0...
Jan 21 08:54:21 np0005590528 systemd[1]: 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488-3c5d5bb8632e499f.service: Main process exited, code=exited, status=1/FAILURE
Jan 21 08:54:21 np0005590528 systemd[1]: 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488-3c5d5bb8632e499f.service: Failed with result 'exit-code'.
Jan 21 08:54:21 np0005590528 edpm-start-podman-container[145220]: Creating additional drop-in dependency for "ovn_controller" (65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488)
Jan 21 08:54:22 np0005590528 systemd[1]: Reloading.
Jan 21 08:54:22 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:54:22 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:54:22 np0005590528 systemd[145276]: Queued start job for default target Main User Target.
Jan 21 08:54:22 np0005590528 systemd[145276]: Created slice User Application Slice.
Jan 21 08:54:22 np0005590528 systemd[145276]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 21 08:54:22 np0005590528 systemd[145276]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 08:54:22 np0005590528 systemd[145276]: Reached target Paths.
Jan 21 08:54:22 np0005590528 systemd[145276]: Reached target Timers.
Jan 21 08:54:22 np0005590528 systemd[145276]: Starting D-Bus User Message Bus Socket...
Jan 21 08:54:22 np0005590528 systemd[145276]: Starting Create User's Volatile Files and Directories...
Jan 21 08:54:22 np0005590528 systemd[145276]: Listening on D-Bus User Message Bus Socket.
Jan 21 08:54:22 np0005590528 systemd[145276]: Reached target Sockets.
Jan 21 08:54:22 np0005590528 systemd[145276]: Finished Create User's Volatile Files and Directories.
Jan 21 08:54:22 np0005590528 systemd[145276]: Reached target Basic System.
Jan 21 08:54:22 np0005590528 systemd[145276]: Reached target Main User Target.
Jan 21 08:54:22 np0005590528 systemd[145276]: Startup finished in 163ms.
Jan 21 08:54:22 np0005590528 systemd[1]: Started User Manager for UID 0.
Jan 21 08:54:22 np0005590528 systemd[1]: Started ovn_controller container.
Jan 21 08:54:22 np0005590528 systemd[1]: Started Session c1 of User root.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: INFO:__main__:Validating config file
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: INFO:__main__:Writing out command to execute
Jan 21 08:54:22 np0005590528 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: ++ cat /run_command
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: + ARGS=
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: + sudo kolla_copy_cacerts
Jan 21 08:54:22 np0005590528 systemd[1]: Started Session c2 of User root.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: + [[ ! -n '' ]]
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: + . kolla_extend_start
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: + umask 0022
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 21 08:54:22 np0005590528 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <info>  [1769003662.4231] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <info>  [1769003662.4243] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <warn>  [1769003662.4247] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <info>  [1769003662.4256] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <info>  [1769003662.4263] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <info>  [1769003662.4268] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 21 08:54:22 np0005590528 kernel: br-int: entered promiscuous mode
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 08:54:22 np0005590528 systemd-udevd[144969]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 08:54:22 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:22Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <info>  [1769003662.4500] manager: (ovn-2384ff-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 21 08:54:22 np0005590528 kernel: genev_sys_6081: entered promiscuous mode
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <info>  [1769003662.4647] device (genev_sys_6081): carrier: link connected
Jan 21 08:54:22 np0005590528 NetworkManager[48860]: <info>  [1769003662.4654] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 21 08:54:23 np0005590528 python3.9[145497]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 21 08:54:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:25 np0005590528 python3.9[145671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:26 np0005590528 python3.9[145795]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003663.6576583-619-126984359837032/.source.yaml _original_basename=.1ywj3tdd follow=False checksum=89182053cc7d7956eb47291ba854186cb3c9f799 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:26 np0005590528 python3.9[145947]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:54:26 np0005590528 ovs-vsctl[145948]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 21 08:54:27 np0005590528 python3.9[146100]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:54:27 np0005590528 ovs-vsctl[146102]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 21 08:54:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:28 np0005590528 python3.9[146255]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:54:28 np0005590528 ovs-vsctl[146256]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 21 08:54:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:28 np0005590528 systemd-logind[780]: Session 46 logged out. Waiting for processes to exit.
Jan 21 08:54:28 np0005590528 systemd[1]: session-46.scope: Deactivated successfully.
Jan 21 08:54:28 np0005590528 systemd[1]: session-46.scope: Consumed 1min 423ms CPU time.
Jan 21 08:54:28 np0005590528 systemd-logind[780]: Removed session 46.
Jan 21 08:54:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:32 np0005590528 systemd[1]: Stopping User Manager for UID 0...
Jan 21 08:54:32 np0005590528 systemd[145276]: Activating special unit Exit the Session...
Jan 21 08:54:32 np0005590528 systemd[145276]: Stopped target Main User Target.
Jan 21 08:54:32 np0005590528 systemd[145276]: Stopped target Basic System.
Jan 21 08:54:32 np0005590528 systemd[145276]: Stopped target Paths.
Jan 21 08:54:32 np0005590528 systemd[145276]: Stopped target Sockets.
Jan 21 08:54:32 np0005590528 systemd[145276]: Stopped target Timers.
Jan 21 08:54:32 np0005590528 systemd[145276]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 08:54:32 np0005590528 systemd[145276]: Closed D-Bus User Message Bus Socket.
Jan 21 08:54:32 np0005590528 systemd[145276]: Stopped Create User's Volatile Files and Directories.
Jan 21 08:54:32 np0005590528 systemd[145276]: Removed slice User Application Slice.
Jan 21 08:54:32 np0005590528 systemd[145276]: Reached target Shutdown.
Jan 21 08:54:32 np0005590528 systemd[145276]: Finished Exit the Session.
Jan 21 08:54:32 np0005590528 systemd[145276]: Reached target Exit the Session.
Jan 21 08:54:32 np0005590528 systemd[1]: user@0.service: Deactivated successfully.
Jan 21 08:54:32 np0005590528 systemd[1]: Stopped User Manager for UID 0.
Jan 21 08:54:32 np0005590528 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 21 08:54:32 np0005590528 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 21 08:54:32 np0005590528 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 21 08:54:32 np0005590528 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 21 08:54:32 np0005590528 systemd[1]: Removed slice User Slice of UID 0.
Jan 21 08:54:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:33 np0005590528 systemd-logind[780]: New session 48 of user zuul.
Jan 21 08:54:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:33 np0005590528 systemd[1]: Started Session 48 of User zuul.
Jan 21 08:54:34 np0005590528 python3.9[146435]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:54:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:36 np0005590528 python3.9[146591]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:36 np0005590528 python3.9[146743]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:37 np0005590528 python3.9[146895]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:38 np0005590528 python3.9[147047]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:38 np0005590528 python3.9[147199]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:54:39
Jan 21 08:54:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:54:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:54:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', 'images', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.control']
Jan 21 08:54:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:54:39 np0005590528 python3.9[147349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:54:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:40 np0005590528 python3.9[147501]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 21 08:54:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:54:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:54:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:54:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:54:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:54:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:54:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:42 np0005590528 python3.9[147652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:42 np0005590528 python3.9[147773]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003681.4734051-81-55308414562917/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:43 np0005590528 python3.9[147923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:44 np0005590528 python3.9[148044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003683.0481741-96-23262624264541/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:45 np0005590528 python3.9[148196]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:54:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:45 np0005590528 python3.9[148280]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:54:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:48 np0005590528 python3.9[148433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:54:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:49 np0005590528 python3.9[148586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:49 np0005590528 python3.9[148707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003688.5937135-133-152238801728705/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:50 np0005590528 python3.9[148857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:54:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:54:50 np0005590528 python3.9[148978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003689.7898202-133-252259948199097/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:52 np0005590528 python3.9[149128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:52 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:52Z|00025|memory|INFO|16128 kB peak resident set size after 29.9 seconds
Jan 21 08:54:52 np0005590528 ovn_controller[145237]: 2026-01-21T13:54:52Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 21 08:54:52 np0005590528 podman[149161]: 2026-01-21 13:54:52.367863136 +0000 UTC m=+0.088379004 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 21 08:54:52 np0005590528 python3.9[149273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003691.634257-177-13539636545492/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:53 np0005590528 python3.9[149423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:53 np0005590528 python3.9[149544]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003692.764324-177-33026031433280/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:54 np0005590528 python3.9[149694]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:54:55 np0005590528 python3.9[149848]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:55 np0005590528 python3.9[150000]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:56 np0005590528 python3.9[150078]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:56 np0005590528 python3.9[150230]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:57 np0005590528 python3.9[150308]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:54:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:58 np0005590528 python3.9[150460]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:54:58 np0005590528 python3.9[150612]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:54:59 np0005590528 python3.9[150690]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:54:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:54:59 np0005590528 python3.9[150842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:55:00 np0005590528 python3.9[150920]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:01 np0005590528 python3.9[151072]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:01 np0005590528 systemd[1]: Reloading.
Jan 21 08:55:01 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:55:01 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:55:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:02 np0005590528 python3.9[151262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:55:02 np0005590528 python3.9[151340]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:03 np0005590528 python3.9[151492]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:55:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:03 np0005590528 python3.9[151570]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:04 np0005590528 python3.9[151722]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:04 np0005590528 systemd[1]: Reloading.
Jan 21 08:55:04 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:55:04 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:55:05 np0005590528 systemd[1]: Starting Create netns directory...
Jan 21 08:55:05 np0005590528 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 08:55:05 np0005590528 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 08:55:05 np0005590528 systemd[1]: Finished Create netns directory.
Jan 21 08:55:05 np0005590528 python3.9[151917]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:55:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:06 np0005590528 python3.9[152069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:55:06 np0005590528 python3.9[152192]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003705.9923124-328-39536336304488/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:55:07 np0005590528 python3.9[152344]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:08 np0005590528 python3.9[152496]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:55:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:09 np0005590528 python3.9[152648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:55:09 np0005590528 python3.9[152771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003708.6451545-361-9907093984083/.source.json _original_basename=.qhm5h_2f follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:10 np0005590528 python3.9[152921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:55:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:55:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:55:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:55:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:55:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:55:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:12 np0005590528 python3.9[153344]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 21 08:55:13 np0005590528 python3.9[153496]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 08:55:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:14 np0005590528 python3[153648]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 08:55:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:26 np0005590528 podman[153661]: 2026-01-21 13:55:26.376307945 +0000 UTC m=+11.879529899 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 08:55:26 np0005590528 podman[153823]: 2026-01-21 13:55:26.393884119 +0000 UTC m=+3.088000685 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 08:55:26 np0005590528 podman[153948]: 2026-01-21 13:55:26.568919628 +0000 UTC m=+0.076570211 container create 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 21 08:55:26 np0005590528 podman[153948]: 2026-01-21 13:55:26.533874113 +0000 UTC m=+0.041524756 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 08:55:26 np0005590528 python3[153648]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:55:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:55:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:55:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:55:27 np0005590528 python3.9[154219]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:55:27 np0005590528 podman[154234]: 2026-01-21 13:55:27.466390202 +0000 UTC m=+0.066633415 container create 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:55:27 np0005590528 systemd[1]: Started libpod-conmon-2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e.scope.
Jan 21 08:55:27 np0005590528 podman[154234]: 2026-01-21 13:55:27.42291216 +0000 UTC m=+0.023155393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:55:27 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:55:27 np0005590528 podman[154234]: 2026-01-21 13:55:27.557938521 +0000 UTC m=+0.158181814 container init 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 08:55:27 np0005590528 podman[154234]: 2026-01-21 13:55:27.570094821 +0000 UTC m=+0.170338044 container start 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:55:27 np0005590528 podman[154234]: 2026-01-21 13:55:27.574545571 +0000 UTC m=+0.174788804 container attach 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:55:27 np0005590528 keen_dubinsky[154274]: 167 167
Jan 21 08:55:27 np0005590528 systemd[1]: libpod-2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e.scope: Deactivated successfully.
Jan 21 08:55:27 np0005590528 podman[154234]: 2026-01-21 13:55:27.582037075 +0000 UTC m=+0.182280318 container died 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:55:27 np0005590528 systemd[1]: var-lib-containers-storage-overlay-a38e22bd6a4dd9a5250cad341bb180f7b30e406ea6f9264211e5af7d5e6c622b-merged.mount: Deactivated successfully.
Jan 21 08:55:27 np0005590528 podman[154234]: 2026-01-21 13:55:27.635198137 +0000 UTC m=+0.235441340 container remove 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:55:27 np0005590528 systemd[1]: libpod-conmon-2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e.scope: Deactivated successfully.
Jan 21 08:55:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:55:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5620 writes, 24K keys, 5620 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5620 writes, 886 syncs, 6.34 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5620 writes, 24K keys, 5620 commit groups, 1.0 writes per commit group, ingest: 18.77 MB, 0.03 MB/s#012Interval WAL: 5620 writes, 886 syncs, 6.34 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 21 08:55:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:27 np0005590528 podman[154371]: 2026-01-21 13:55:27.825992785 +0000 UTC m=+0.063361075 container create 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 08:55:27 np0005590528 systemd[1]: Started libpod-conmon-25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969.scope.
Jan 21 08:55:27 np0005590528 podman[154371]: 2026-01-21 13:55:27.800772552 +0000 UTC m=+0.038140862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:55:27 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:55:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:27 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:27 np0005590528 podman[154371]: 2026-01-21 13:55:27.91738983 +0000 UTC m=+0.154758140 container init 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:55:27 np0005590528 podman[154371]: 2026-01-21 13:55:27.928967146 +0000 UTC m=+0.166335426 container start 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 08:55:27 np0005590528 podman[154371]: 2026-01-21 13:55:27.932624246 +0000 UTC m=+0.169992526 container attach 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 08:55:28 np0005590528 python3.9[154444]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:28 np0005590528 xenodochial_babbage[154429]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:55:28 np0005590528 xenodochial_babbage[154429]: --> All data devices are unavailable
Jan 21 08:55:28 np0005590528 systemd[1]: libpod-25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969.scope: Deactivated successfully.
Jan 21 08:55:28 np0005590528 podman[154371]: 2026-01-21 13:55:28.435043862 +0000 UTC m=+0.672412162 container died 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 08:55:28 np0005590528 systemd[1]: var-lib-containers-storage-overlay-10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f-merged.mount: Deactivated successfully.
Jan 21 08:55:28 np0005590528 podman[154371]: 2026-01-21 13:55:28.485348074 +0000 UTC m=+0.722716354 container remove 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:55:28 np0005590528 systemd[1]: libpod-conmon-25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969.scope: Deactivated successfully.
Jan 21 08:55:28 np0005590528 python3.9[154533]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 08:55:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:28 np0005590528 podman[154688]: 2026-01-21 13:55:28.926697834 +0000 UTC m=+0.048342104 container create 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:55:28 np0005590528 systemd[1]: Started libpod-conmon-5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb.scope.
Jan 21 08:55:28 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:55:28 np0005590528 podman[154688]: 2026-01-21 13:55:28.902911047 +0000 UTC m=+0.024555317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:55:29 np0005590528 podman[154688]: 2026-01-21 13:55:29.011465205 +0000 UTC m=+0.133109475 container init 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 08:55:29 np0005590528 podman[154688]: 2026-01-21 13:55:29.020198291 +0000 UTC m=+0.141842531 container start 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 08:55:29 np0005590528 podman[154688]: 2026-01-21 13:55:29.025873051 +0000 UTC m=+0.147517301 container attach 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 08:55:29 np0005590528 busy_shaw[154740]: 167 167
Jan 21 08:55:29 np0005590528 systemd[1]: libpod-5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb.scope: Deactivated successfully.
Jan 21 08:55:29 np0005590528 podman[154688]: 2026-01-21 13:55:29.028964598 +0000 UTC m=+0.150608838 container died 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:55:29 np0005590528 systemd[1]: var-lib-containers-storage-overlay-168920f88ee9e3d4028a0b400d40ca882f98b8e8491e232038df3460b3f4e05c-merged.mount: Deactivated successfully.
Jan 21 08:55:29 np0005590528 podman[154688]: 2026-01-21 13:55:29.068710938 +0000 UTC m=+0.190355168 container remove 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:55:29 np0005590528 systemd[1]: libpod-conmon-5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb.scope: Deactivated successfully.
Jan 21 08:55:29 np0005590528 podman[154805]: 2026-01-21 13:55:29.258623234 +0000 UTC m=+0.064187355 container create 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 08:55:29 np0005590528 python3.9[154797]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003728.640321-439-5698257486068/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:29 np0005590528 systemd[1]: Started libpod-conmon-4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1.scope.
Jan 21 08:55:29 np0005590528 podman[154805]: 2026-01-21 13:55:29.223410296 +0000 UTC m=+0.028974467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:55:29 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:55:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:29 np0005590528 podman[154805]: 2026-01-21 13:55:29.348802579 +0000 UTC m=+0.154366710 container init 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:55:29 np0005590528 podman[154805]: 2026-01-21 13:55:29.35979572 +0000 UTC m=+0.165359831 container start 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:55:29 np0005590528 podman[154805]: 2026-01-21 13:55:29.364364203 +0000 UTC m=+0.169928334 container attach 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]: {
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:    "0": [
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:        {
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "devices": [
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "/dev/loop3"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            ],
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_name": "ceph_lv0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_size": "21470642176",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "name": "ceph_lv0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "tags": {
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cluster_name": "ceph",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.crush_device_class": "",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.encrypted": "0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.objectstore": "bluestore",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osd_id": "0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.type": "block",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.vdo": "0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.with_tpm": "0"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            },
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "type": "block",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "vg_name": "ceph_vg0"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:        }
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:    ],
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:    "1": [
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:        {
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "devices": [
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "/dev/loop4"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            ],
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_name": "ceph_lv1",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_size": "21470642176",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "name": "ceph_lv1",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "tags": {
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cluster_name": "ceph",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.crush_device_class": "",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.encrypted": "0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.objectstore": "bluestore",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osd_id": "1",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.type": "block",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.vdo": "0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.with_tpm": "0"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            },
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "type": "block",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "vg_name": "ceph_vg1"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:        }
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:    ],
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:    "2": [
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:        {
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "devices": [
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "/dev/loop5"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            ],
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_name": "ceph_lv2",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_size": "21470642176",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "name": "ceph_lv2",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "tags": {
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.cluster_name": "ceph",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.crush_device_class": "",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.encrypted": "0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.objectstore": "bluestore",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osd_id": "2",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.type": "block",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.vdo": "0",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:                "ceph.with_tpm": "0"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            },
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "type": "block",
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:            "vg_name": "ceph_vg2"
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:        }
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]:    ]
Jan 21 08:55:29 np0005590528 epic_lumiere[154822]: }
Jan 21 08:55:29 np0005590528 systemd[1]: libpod-4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1.scope: Deactivated successfully.
Jan 21 08:55:29 np0005590528 podman[154805]: 2026-01-21 13:55:29.673170103 +0000 UTC m=+0.478734214 container died 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 08:55:29 np0005590528 python3.9[154902]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 08:55:29 np0005590528 systemd[1]: Reloading.
Jan 21 08:55:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:29 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:55:29 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:55:30 np0005590528 systemd[1]: var-lib-containers-storage-overlay-45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9-merged.mount: Deactivated successfully.
Jan 21 08:55:30 np0005590528 podman[154805]: 2026-01-21 13:55:30.521760831 +0000 UTC m=+1.327324942 container remove 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 08:55:30 np0005590528 systemd[1]: libpod-conmon-4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1.scope: Deactivated successfully.
Jan 21 08:55:30 np0005590528 python3.9[155033]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:30 np0005590528 systemd[1]: Reloading.
Jan 21 08:55:30 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:55:30 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:55:31 np0005590528 podman[155134]: 2026-01-21 13:55:30.991989514 +0000 UTC m=+0.024359593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:55:31 np0005590528 systemd[1]: Starting ovn_metadata_agent container...
Jan 21 08:55:31 np0005590528 podman[155134]: 2026-01-21 13:55:31.247305863 +0000 UTC m=+0.279675892 container create 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 08:55:31 np0005590528 systemd[1]: Started libpod-conmon-89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066.scope.
Jan 21 08:55:31 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:55:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35671b1c7d92c7660872932435139198769fd63a07748331c62a0e8a0175a667/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:31 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35671b1c7d92c7660872932435139198769fd63a07748331c62a0e8a0175a667/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:31 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:55:31 np0005590528 systemd[1]: Started /usr/bin/podman healthcheck run 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2.
Jan 21 08:55:31 np0005590528 podman[155134]: 2026-01-21 13:55:31.642830363 +0000 UTC m=+0.675200482 container init 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 21 08:55:31 np0005590528 podman[155134]: 2026-01-21 13:55:31.655373351 +0000 UTC m=+0.687743420 container start 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 08:55:31 np0005590528 podman[155152]: 2026-01-21 13:55:31.655863904 +0000 UTC m=+0.516011044 container init 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 21 08:55:31 np0005590528 podman[155134]: 2026-01-21 13:55:31.661230427 +0000 UTC m=+0.693600456 container attach 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:55:31 np0005590528 youthful_shtern[155172]: 167 167
Jan 21 08:55:31 np0005590528 podman[155134]: 2026-01-21 13:55:31.664758193 +0000 UTC m=+0.697128212 container died 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 08:55:31 np0005590528 systemd[1]: libpod-89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066.scope: Deactivated successfully.
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + sudo -E kolla_set_configs
Jan 21 08:55:31 np0005590528 podman[155152]: 2026-01-21 13:55:31.688674753 +0000 UTC m=+0.548821873 container start 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 21 08:55:31 np0005590528 edpm-start-podman-container[155152]: ovn_metadata_agent
Jan 21 08:55:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:31 np0005590528 systemd[1]: var-lib-containers-storage-overlay-81e520e0451c41c1a198e3c64edff492c2ae559f679686d54018d14d40a92af7-merged.mount: Deactivated successfully.
Jan 21 08:55:31 np0005590528 podman[155134]: 2026-01-21 13:55:31.85226334 +0000 UTC m=+0.884633359 container remove 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 08:55:31 np0005590528 systemd[1]: libpod-conmon-89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066.scope: Deactivated successfully.
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Validating config file
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Copying service configuration files
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Writing out command to execute
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: ++ cat /run_command
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + CMD=neutron-ovn-metadata-agent
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + ARGS=
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + sudo kolla_copy_cacerts
Jan 21 08:55:31 np0005590528 edpm-start-podman-container[155151]: Creating additional drop-in dependency for "ovn_metadata_agent" (9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2)
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + [[ ! -n '' ]]
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + . kolla_extend_start
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + umask 0022
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: + exec neutron-ovn-metadata-agent
Jan 21 08:55:31 np0005590528 ovn_metadata_agent[155169]: Running command: 'neutron-ovn-metadata-agent'
Jan 21 08:55:31 np0005590528 systemd[1]: Reloading.
Jan 21 08:55:31 np0005590528 podman[155182]: 2026-01-21 13:55:31.92927506 +0000 UTC m=+0.225023214 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 08:55:32 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:55:32 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:55:32 np0005590528 podman[155262]: 2026-01-21 13:55:32.040138405 +0000 UTC m=+0.025984301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:55:32 np0005590528 systemd[1]: Started ovn_metadata_agent container.
Jan 21 08:55:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:55:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 6984 writes, 28K keys, 6984 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6984 writes, 1319 syncs, 5.29 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6984 writes, 28K keys, 6984 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s#012Interval WAL: 6984 writes, 1319 syncs, 5.29 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 21 08:55:32 np0005590528 podman[155262]: 2026-01-21 13:55:32.888194091 +0000 UTC m=+0.874039997 container create dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:55:33 np0005590528 python3.9[155443]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 21 08:55:33 np0005590528 systemd[1]: Started libpod-conmon-dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af.scope.
Jan 21 08:55:33 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:55:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:55:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:33 np0005590528 podman[155262]: 2026-01-21 13:55:33.707400814 +0000 UTC m=+1.693246700 container init dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 21 08:55:33 np0005590528 podman[155262]: 2026-01-21 13:55:33.715631117 +0000 UTC m=+1.701476993 container start dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 08:55:33 np0005590528 podman[155262]: 2026-01-21 13:55:33.794359729 +0000 UTC m=+1.780205615 container attach dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 08:55:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.835 155179 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.835 155179 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.883 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.884 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.884 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.884 155179 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.885 155179 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.899 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 3ade990a-d6f9-4724-a58c-009e4fc34364 (UUID: 3ade990a-d6f9-4724-a58c-009e4fc34364) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.917 155179 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.918 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.918 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.918 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.921 155179 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.927 155179 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.933 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '3ade990a-d6f9-4724-a58c-009e4fc34364'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd43120a7c0>], external_ids={}, name=3ade990a-d6f9-4724-a58c-009e4fc34364, nb_cfg_timestamp=1769003670447, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.934 155179 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd43120ae50>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.935 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.935 155179 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.935 155179 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.935 155179 INFO oslo_service.service [-] Starting 1 workers
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.941 155179 DEBUG oslo_service.service [-] Started child 155613 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.945 155179 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpjyd3laql/privsep.sock']
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.945 155613 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-428749'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.966 155613 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.966 155613 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.967 155613 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.970 155613 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.976 155613 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 08:55:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.982 155613 INFO eventlet.wsgi.server [-] (155613) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 21 08:55:34 np0005590528 python3.9[155602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:55:34 np0005590528 lvm[155807]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:55:34 np0005590528 lvm[155807]: VG ceph_vg1 finished
Jan 21 08:55:34 np0005590528 lvm[155806]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:55:34 np0005590528 lvm[155806]: VG ceph_vg0 finished
Jan 21 08:55:34 np0005590528 lvm[155809]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:55:34 np0005590528 lvm[155809]: VG ceph_vg2 finished
Jan 21 08:55:34 np0005590528 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 21 08:55:34 np0005590528 suspicious_ganguly[155493]: {}
Jan 21 08:55:34 np0005590528 systemd[1]: libpod-dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af.scope: Deactivated successfully.
Jan 21 08:55:34 np0005590528 systemd[1]: libpod-dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af.scope: Consumed 1.420s CPU time.
Jan 21 08:55:34 np0005590528 podman[155262]: 2026-01-21 13:55:34.593824416 +0000 UTC m=+2.579670322 container died dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:55:34 np0005590528 python3.9[155802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003733.5649986-484-272982526493205/.source.yaml _original_basename=.5r64btgd follow=False checksum=10f7f895d301938e0fadf18a8ee2b485f6809c3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.676 155179 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 21 08:55:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.678 155179 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpjyd3laql/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 21 08:55:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.518 155811 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 21 08:55:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.522 155811 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 21 08:55:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.524 155811 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 21 08:55:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.524 155811 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155811
Jan 21 08:55:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.682 155811 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac9e61f-c4a9-4cfa-904e-fa55e1aa94a3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 21 08:55:35 np0005590528 systemd[1]: session-48.scope: Deactivated successfully.
Jan 21 08:55:35 np0005590528 systemd[1]: session-48.scope: Consumed 57.852s CPU time.
Jan 21 08:55:35 np0005590528 systemd-logind[780]: Session 48 logged out. Waiting for processes to exit.
Jan 21 08:55:35 np0005590528 systemd-logind[780]: Removed session 48.
Jan 21 08:55:35 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6-merged.mount: Deactivated successfully.
Jan 21 08:55:35 np0005590528 podman[155262]: 2026-01-21 13:55:35.104300962 +0000 UTC m=+3.090146868 container remove dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:55:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:55:35 np0005590528 systemd[1]: libpod-conmon-dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af.scope: Deactivated successfully.
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.216 155811 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.216 155811 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.216 155811 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 08:55:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:55:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.804 155811 DEBUG oslo.privsep.daemon [-] privsep: reply[225b7260-b053-4e30-8fe3-3d4c63bb79df]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.807 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, column=external_ids, values=({'neutron:ovn-metadata-id': '817f77ed-8014-5ed7-bdf9-4f7a33d6b36b'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 08:55:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.868 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.881 155179 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.881 155179 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.881 155179 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.886 155179 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.886 155179 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.886 155179 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.886 155179 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.887 155179 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.887 155179 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.887 155179 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.887 155179 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.895 155179 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.895 155179 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.895 155179 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.895 155179 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.896 155179 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.896 155179 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.896 155179 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.896 155179 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.902 155179 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.902 155179 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.902 155179 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.902 155179 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.936 155179 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.936 155179 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.936 155179 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 08:55:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.936 155179 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 21 08:55:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 08:55:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 5492 writes, 23K keys, 5492 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5492 writes, 812 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5492 writes, 23K keys, 5492 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s#012Interval WAL: 5492 writes, 812 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Jan 21 08:55:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:55:39
Jan 21 08:55:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:55:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:55:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.control', 'backups']
Jan 21 08:55:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:55:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:40 np0005590528 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 08:55:40 np0005590528 systemd-logind[780]: New session 49 of user zuul.
Jan 21 08:55:40 np0005590528 systemd[1]: Started Session 49 of User zuul.
Jan 21 08:55:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:55:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:55:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:55:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:55:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:55:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:55:41 np0005590528 python3.9[156031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:55:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:42 np0005590528 python3.9[156187]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:55:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:44 np0005590528 python3.9[156353]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 08:55:44 np0005590528 systemd[1]: Reloading.
Jan 21 08:55:44 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:55:44 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:55:45 np0005590528 python3.9[156539]: ansible-ansible.builtin.service_facts Invoked
Jan 21 08:55:45 np0005590528 network[156556]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 08:55:45 np0005590528 network[156557]: 'network-scripts' will be removed from distribution in near future.
Jan 21 08:55:45 np0005590528 network[156558]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 08:55:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:55:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:55:50 np0005590528 python3.9[156820]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:51 np0005590528 python3.9[156973]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:53 np0005590528 python3.9[157126]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:54 np0005590528 python3.9[157279]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:55 np0005590528 python3.9[157432]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:55 np0005590528 python3.9[157585]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:56 np0005590528 podman[157738]: 2026-01-21 13:55:56.584967681 +0000 UTC m=+0.109044282 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:55:56 np0005590528 python3.9[157739]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 08:55:57 np0005590528 python3.9[157917]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:55:58 np0005590528 python3.9[158069]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:59 np0005590528 python3.9[158221]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:55:59 np0005590528 python3.9[158373]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:55:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:00 np0005590528 python3.9[158525]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:01 np0005590528 python3.9[158677]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:01 np0005590528 python3.9[158829]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:02 np0005590528 podman[158929]: 2026-01-21 13:56:02.367375386 +0000 UTC m=+0.092663757 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 08:56:02 np0005590528 python3.9[159000]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:03 np0005590528 python3.9[159152]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:03 np0005590528 python3.9[159304]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:04 np0005590528 python3.9[159456]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:05 np0005590528 python3.9[159608]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:05 np0005590528 python3.9[159760]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:06 np0005590528 python3.9[159912]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:56:07 np0005590528 python3.9[160064]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:56:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:08 np0005590528 python3.9[160216]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 08:56:09 np0005590528 python3.9[160368]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 08:56:09 np0005590528 systemd[1]: Reloading.
Jan 21 08:56:09 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:56:09 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:56:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:10 np0005590528 python3.9[160555]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:56:10 np0005590528 python3.9[160708]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:56:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:56:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:56:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:56:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:56:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:56:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:56:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:11 np0005590528 python3.9[160861]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:56:12 np0005590528 python3.9[161014]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:56:13 np0005590528 python3.9[161167]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:56:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:13 np0005590528 python3.9[161320]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:56:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:14 np0005590528 python3.9[161473]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:56:15 np0005590528 python3.9[161626]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 21 08:56:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:16 np0005590528 python3.9[161779]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 08:56:17 np0005590528 python3.9[161937]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 08:56:17 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 08:56:17 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 08:56:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:18 np0005590528 python3.9[162098]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 08:56:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:19 np0005590528 python3.9[162182]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 08:56:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:27 np0005590528 podman[162195]: 2026-01-21 13:56:27.389136084 +0000 UTC m=+0.108877763 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 21 08:56:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:33 np0005590528 podman[162219]: 2026-01-21 13:56:33.326257915 +0000 UTC m=+0.053013238 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 21 08:56:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 21 08:56:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:56:33.886 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 08:56:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:56:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 08:56:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:56:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 08:56:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 08:56:35 np0005590528 podman[162363]: 2026-01-21 13:56:35.9414882 +0000 UTC m=+0.117413689 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 08:56:36 np0005590528 podman[162363]: 2026-01-21 13:56:36.062960025 +0000 UTC m=+0.238885524 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 08:56:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:56:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:56:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:56:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:56:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 08:56:38 np0005590528 podman[162748]: 2026-01-21 13:56:38.124284448 +0000 UTC m=+0.058573431 container create 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 21 08:56:38 np0005590528 systemd[1]: Started libpod-conmon-9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d.scope.
Jan 21 08:56:38 np0005590528 podman[162748]: 2026-01-21 13:56:38.092732958 +0000 UTC m=+0.027022041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:56:38 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:56:38 np0005590528 podman[162748]: 2026-01-21 13:56:38.22088794 +0000 UTC m=+0.155176943 container init 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 08:56:38 np0005590528 podman[162748]: 2026-01-21 13:56:38.230704115 +0000 UTC m=+0.164993098 container start 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:56:38 np0005590528 podman[162748]: 2026-01-21 13:56:38.234176243 +0000 UTC m=+0.168465246 container attach 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 21 08:56:38 np0005590528 wonderful_moore[162767]: 167 167
Jan 21 08:56:38 np0005590528 systemd[1]: libpod-9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d.scope: Deactivated successfully.
Jan 21 08:56:38 np0005590528 conmon[162767]: conmon 9260d4494541e4fcf093 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d.scope/container/memory.events
Jan 21 08:56:38 np0005590528 podman[162748]: 2026-01-21 13:56:38.239682494 +0000 UTC m=+0.173971497 container died 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:56:38 np0005590528 systemd[1]: var-lib-containers-storage-overlay-15d4a27de7b7de94f054afd36c35823353887a0432260cbe8cf96577386c4488-merged.mount: Deactivated successfully.
Jan 21 08:56:38 np0005590528 podman[162748]: 2026-01-21 13:56:38.281894566 +0000 UTC m=+0.216183549 container remove 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:56:38 np0005590528 systemd[1]: libpod-conmon-9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d.scope: Deactivated successfully.
Jan 21 08:56:38 np0005590528 podman[162799]: 2026-01-21 13:56:38.476702369 +0000 UTC m=+0.069504221 container create 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:56:38 np0005590528 systemd[1]: Started libpod-conmon-312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595.scope.
Jan 21 08:56:38 np0005590528 podman[162799]: 2026-01-21 13:56:38.438296206 +0000 UTC m=+0.031098158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:56:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 08:56:38 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:56:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:56:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:56:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:38 np0005590528 podman[162799]: 2026-01-21 13:56:38.581126134 +0000 UTC m=+0.173928016 container init 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:56:38 np0005590528 podman[162799]: 2026-01-21 13:56:38.59708137 +0000 UTC m=+0.189883222 container start 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:56:38 np0005590528 podman[162799]: 2026-01-21 13:56:38.600873058 +0000 UTC m=+0.193675050 container attach 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 08:56:39 np0005590528 reverent_gates[162821]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:56:39 np0005590528 reverent_gates[162821]: --> All data devices are unavailable
Jan 21 08:56:39 np0005590528 systemd[1]: libpod-312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595.scope: Deactivated successfully.
Jan 21 08:56:39 np0005590528 podman[162799]: 2026-01-21 13:56:39.128241594 +0000 UTC m=+0.721043456 container died 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 08:56:39 np0005590528 systemd[1]: var-lib-containers-storage-overlay-eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86-merged.mount: Deactivated successfully.
Jan 21 08:56:39 np0005590528 podman[162799]: 2026-01-21 13:56:39.17958188 +0000 UTC m=+0.772383732 container remove 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 08:56:39 np0005590528 systemd[1]: libpod-conmon-312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595.scope: Deactivated successfully.
Jan 21 08:56:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:56:39
Jan 21 08:56:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:56:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:56:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta']
Jan 21 08:56:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:56:39 np0005590528 podman[162948]: 2026-01-21 13:56:39.695030117 +0000 UTC m=+0.082799934 container create 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 08:56:39 np0005590528 systemd[1]: Started libpod-conmon-1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3.scope.
Jan 21 08:56:39 np0005590528 podman[162948]: 2026-01-21 13:56:39.650990539 +0000 UTC m=+0.038760306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:56:39 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:56:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 08:56:39 np0005590528 podman[162948]: 2026-01-21 13:56:39.906184439 +0000 UTC m=+0.293954206 container init 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:56:39 np0005590528 podman[162948]: 2026-01-21 13:56:39.917890912 +0000 UTC m=+0.305660649 container start 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 08:56:39 np0005590528 podman[162948]: 2026-01-21 13:56:39.922504346 +0000 UTC m=+0.310274073 container attach 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:56:39 np0005590528 sharp_heisenberg[162967]: 167 167
Jan 21 08:56:39 np0005590528 systemd[1]: libpod-1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3.scope: Deactivated successfully.
Jan 21 08:56:39 np0005590528 podman[162948]: 2026-01-21 13:56:39.926634154 +0000 UTC m=+0.314403881 container died 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:56:39 np0005590528 systemd[1]: var-lib-containers-storage-overlay-94aa4b32ee3c46175c93f4bb7a7295aa1e493d6713fbe6bd0a111dad168a1a37-merged.mount: Deactivated successfully.
Jan 21 08:56:39 np0005590528 podman[162948]: 2026-01-21 13:56:39.982923073 +0000 UTC m=+0.370692800 container remove 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:56:40 np0005590528 systemd[1]: libpod-conmon-1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3.scope: Deactivated successfully.
Jan 21 08:56:40 np0005590528 podman[163000]: 2026-01-21 13:56:40.201706461 +0000 UTC m=+0.053305487 container create 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 08:56:40 np0005590528 systemd[1]: Started libpod-conmon-91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819.scope.
Jan 21 08:56:40 np0005590528 podman[163000]: 2026-01-21 13:56:40.176675444 +0000 UTC m=+0.028274550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:56:40 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:56:40 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:40 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:40 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:40 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:40 np0005590528 podman[163000]: 2026-01-21 13:56:40.312126063 +0000 UTC m=+0.163725099 container init 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:56:40 np0005590528 podman[163000]: 2026-01-21 13:56:40.326998945 +0000 UTC m=+0.178597951 container start 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:56:40 np0005590528 podman[163000]: 2026-01-21 13:56:40.330671689 +0000 UTC m=+0.182270695 container attach 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:56:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]: {
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:    "0": [
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:        {
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "devices": [
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "/dev/loop3"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            ],
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_name": "ceph_lv0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_size": "21470642176",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "name": "ceph_lv0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "tags": {
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cluster_name": "ceph",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.crush_device_class": "",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.encrypted": "0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.objectstore": "bluestore",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osd_id": "0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.type": "block",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.vdo": "0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.with_tpm": "0"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            },
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "type": "block",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "vg_name": "ceph_vg0"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:        }
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:    ],
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:    "1": [
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:        {
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "devices": [
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "/dev/loop4"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            ],
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_name": "ceph_lv1",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_size": "21470642176",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "name": "ceph_lv1",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "tags": {
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cluster_name": "ceph",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.crush_device_class": "",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.encrypted": "0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.objectstore": "bluestore",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osd_id": "1",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.type": "block",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.vdo": "0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.with_tpm": "0"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            },
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "type": "block",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "vg_name": "ceph_vg1"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:        }
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:    ],
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:    "2": [
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:        {
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "devices": [
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "/dev/loop5"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            ],
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_name": "ceph_lv2",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_size": "21470642176",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "name": "ceph_lv2",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "tags": {
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.cluster_name": "ceph",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.crush_device_class": "",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.encrypted": "0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.objectstore": "bluestore",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osd_id": "2",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.type": "block",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.vdo": "0",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:                "ceph.with_tpm": "0"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            },
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "type": "block",
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:            "vg_name": "ceph_vg2"
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:        }
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]:    ]
Jan 21 08:56:40 np0005590528 sharp_mendel[163019]: }
Jan 21 08:56:40 np0005590528 systemd[1]: libpod-91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819.scope: Deactivated successfully.
Jan 21 08:56:40 np0005590528 conmon[163019]: conmon 91a49741916649ec12b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819.scope/container/memory.events
Jan 21 08:56:40 np0005590528 podman[163035]: 2026-01-21 13:56:40.720020497 +0000 UTC m=+0.030365804 container died 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 08:56:40 np0005590528 systemd[1]: var-lib-containers-storage-overlay-cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8-merged.mount: Deactivated successfully.
Jan 21 08:56:40 np0005590528 podman[163035]: 2026-01-21 13:56:40.785966476 +0000 UTC m=+0.096311703 container remove 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 08:56:40 np0005590528 systemd[1]: libpod-conmon-91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819.scope: Deactivated successfully.
Jan 21 08:56:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:56:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:56:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:56:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:56:41 np0005590528 podman[163112]: 2026-01-21 13:56:41.240398258 +0000 UTC m=+0.021224561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:56:41 np0005590528 podman[163112]: 2026-01-21 13:56:41.440902638 +0000 UTC m=+0.221728951 container create d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:56:41 np0005590528 systemd[1]: Started libpod-conmon-d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907.scope.
Jan 21 08:56:41 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:56:41 np0005590528 podman[163112]: 2026-01-21 13:56:41.531514464 +0000 UTC m=+0.312340807 container init d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 08:56:41 np0005590528 podman[163112]: 2026-01-21 13:56:41.538448779 +0000 UTC m=+0.319275112 container start d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:56:41 np0005590528 podman[163112]: 2026-01-21 13:56:41.542735403 +0000 UTC m=+0.323561686 container attach d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 08:56:41 np0005590528 confident_buck[163128]: 167 167
Jan 21 08:56:41 np0005590528 systemd[1]: libpod-d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907.scope: Deactivated successfully.
Jan 21 08:56:41 np0005590528 podman[163112]: 2026-01-21 13:56:41.546796719 +0000 UTC m=+0.327622992 container died d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:56:41 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b935e79bd7fd359272612957be07fd42f2c76b48956777a118b602b67482e320-merged.mount: Deactivated successfully.
Jan 21 08:56:41 np0005590528 podman[163112]: 2026-01-21 13:56:41.60283544 +0000 UTC m=+0.383661723 container remove d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 08:56:41 np0005590528 systemd[1]: libpod-conmon-d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907.scope: Deactivated successfully.
Jan 21 08:56:41 np0005590528 podman[163152]: 2026-01-21 13:56:41.843279402 +0000 UTC m=+0.071483492 container create 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:56:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 08:56:41 np0005590528 systemd[1]: Started libpod-conmon-51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede.scope.
Jan 21 08:56:41 np0005590528 podman[163152]: 2026-01-21 13:56:41.812425493 +0000 UTC m=+0.040629643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:56:41 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:56:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:56:41 np0005590528 podman[163152]: 2026-01-21 13:56:41.945221529 +0000 UTC m=+0.173425609 container init 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 08:56:41 np0005590528 podman[163152]: 2026-01-21 13:56:41.953603419 +0000 UTC m=+0.181807469 container start 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 08:56:41 np0005590528 podman[163152]: 2026-01-21 13:56:41.958454491 +0000 UTC m=+0.186658541 container attach 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 08:56:42 np0005590528 lvm[163248]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:56:42 np0005590528 lvm[163248]: VG ceph_vg1 finished
Jan 21 08:56:42 np0005590528 lvm[163247]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:56:42 np0005590528 lvm[163247]: VG ceph_vg0 finished
Jan 21 08:56:42 np0005590528 lvm[163250]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:56:42 np0005590528 lvm[163250]: VG ceph_vg2 finished
Jan 21 08:56:42 np0005590528 nice_noyce[163169]: {}
Jan 21 08:56:42 np0005590528 systemd[1]: libpod-51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede.scope: Deactivated successfully.
Jan 21 08:56:42 np0005590528 podman[163152]: 2026-01-21 13:56:42.750525794 +0000 UTC m=+0.978729864 container died 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:56:42 np0005590528 systemd[1]: libpod-51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede.scope: Consumed 1.214s CPU time.
Jan 21 08:56:42 np0005590528 systemd[1]: var-lib-containers-storage-overlay-5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f-merged.mount: Deactivated successfully.
Jan 21 08:56:42 np0005590528 podman[163152]: 2026-01-21 13:56:42.804452499 +0000 UTC m=+1.032656579 container remove 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:56:42 np0005590528 systemd[1]: libpod-conmon-51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede.scope: Deactivated successfully.
Jan 21 08:56:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:56:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:56:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 08:56:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:56:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 21 08:56:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:56:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:56:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:55 np0005590528 kernel: SELinux:  Converting 2774 SID table entries...
Jan 21 08:56:55 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:56:55 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:56:55 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:56:55 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:56:55 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:56:55 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:56:55 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:56:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:56:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:56:58 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 21 08:56:58 np0005590528 podman[163308]: 2026-01-21 13:56:58.381789236 +0000 UTC m=+0.097350786 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 21 08:56:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:04 np0005590528 podman[163340]: 2026-01-21 13:57:04.34150534 +0000 UTC m=+0.070544910 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:57:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:05 np0005590528 kernel: SELinux:  Converting 2774 SID table entries...
Jan 21 08:57:05 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:57:05 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:57:05 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:57:05 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:57:05 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:57:05 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:57:05 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:57:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:57:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:57:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:57:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:57:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:57:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:57:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:29 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 21 08:57:29 np0005590528 podman[170169]: 2026-01-21 13:57:29.381965548 +0000 UTC m=+0.088129993 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 08:57:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:57:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 08:57:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:57:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 08:57:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:57:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 08:57:35 np0005590528 podman[173869]: 2026-01-21 13:57:35.344411796 +0000 UTC m=+0.075432103 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 21 08:57:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:57:39
Jan 21 08:57:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:57:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:57:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['backups', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images']
Jan 21 08:57:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:57:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:57:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:57:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:57:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:57:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:57:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:57:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:44 np0005590528 podman[179154]: 2026-01-21 13:57:44.06591568 +0000 UTC m=+0.048485080 container create b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 08:57:44 np0005590528 systemd[1]: Started libpod-conmon-b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4.scope.
Jan 21 08:57:44 np0005590528 podman[179154]: 2026-01-21 13:57:44.039910597 +0000 UTC m=+0.022480027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:57:44 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:57:44 np0005590528 podman[179154]: 2026-01-21 13:57:44.226300306 +0000 UTC m=+0.208869726 container init b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 21 08:57:44 np0005590528 podman[179154]: 2026-01-21 13:57:44.233969572 +0000 UTC m=+0.216539012 container start b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:57:44 np0005590528 bold_burnell[179252]: 167 167
Jan 21 08:57:44 np0005590528 systemd[1]: libpod-b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4.scope: Deactivated successfully.
Jan 21 08:57:44 np0005590528 podman[179154]: 2026-01-21 13:57:44.241010873 +0000 UTC m=+0.223580303 container attach b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:57:44 np0005590528 podman[179154]: 2026-01-21 13:57:44.241494405 +0000 UTC m=+0.224063845 container died b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 08:57:44 np0005590528 systemd[1]: var-lib-containers-storage-overlay-bcc46d0c308f97cf0f0ed84b1ad02615395755fc8f4a193726b95fa3ef6405cc-merged.mount: Deactivated successfully.
Jan 21 08:57:44 np0005590528 podman[179154]: 2026-01-21 13:57:44.322950704 +0000 UTC m=+0.305520104 container remove b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 08:57:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:57:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:57:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:57:44 np0005590528 systemd[1]: libpod-conmon-b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4.scope: Deactivated successfully.
Jan 21 08:57:44 np0005590528 podman[179491]: 2026-01-21 13:57:44.484283544 +0000 UTC m=+0.039232474 container create 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 08:57:44 np0005590528 systemd[1]: Started libpod-conmon-8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61.scope.
Jan 21 08:57:44 np0005590528 podman[179491]: 2026-01-21 13:57:44.469405222 +0000 UTC m=+0.024354162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:57:44 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:57:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:44 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:44 np0005590528 podman[179491]: 2026-01-21 13:57:44.590980686 +0000 UTC m=+0.145929726 container init 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 08:57:44 np0005590528 podman[179491]: 2026-01-21 13:57:44.601822948 +0000 UTC m=+0.156771888 container start 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 08:57:44 np0005590528 podman[179491]: 2026-01-21 13:57:44.619657532 +0000 UTC m=+0.174606802 container attach 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:57:45 np0005590528 flamboyant_brown[179558]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:57:45 np0005590528 flamboyant_brown[179558]: --> All data devices are unavailable
Jan 21 08:57:45 np0005590528 systemd[1]: libpod-8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61.scope: Deactivated successfully.
Jan 21 08:57:45 np0005590528 podman[179491]: 2026-01-21 13:57:45.104802909 +0000 UTC m=+0.659751849 container died 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 08:57:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:47 np0005590528 systemd[1]: var-lib-containers-storage-overlay-610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69-merged.mount: Deactivated successfully.
Jan 21 08:57:47 np0005590528 podman[179491]: 2026-01-21 13:57:47.289081124 +0000 UTC m=+2.844030104 container remove 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:57:47 np0005590528 systemd[1]: libpod-conmon-8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61.scope: Deactivated successfully.
Jan 21 08:57:47 np0005590528 podman[180563]: 2026-01-21 13:57:47.866318678 +0000 UTC m=+0.087513327 container create 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:57:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:47 np0005590528 podman[180563]: 2026-01-21 13:57:47.819217413 +0000 UTC m=+0.040412092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:57:47 np0005590528 systemd[1]: Started libpod-conmon-950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8.scope.
Jan 21 08:57:47 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:57:48 np0005590528 podman[180563]: 2026-01-21 13:57:48.037092577 +0000 UTC m=+0.258287256 container init 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:57:48 np0005590528 podman[180563]: 2026-01-21 13:57:48.050099753 +0000 UTC m=+0.271294402 container start 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:57:48 np0005590528 podman[180563]: 2026-01-21 13:57:48.054128781 +0000 UTC m=+0.275323450 container attach 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 08:57:48 np0005590528 objective_mclaren[180580]: 167 167
Jan 21 08:57:48 np0005590528 systemd[1]: libpod-950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8.scope: Deactivated successfully.
Jan 21 08:57:48 np0005590528 podman[180563]: 2026-01-21 13:57:48.056867257 +0000 UTC m=+0.278061966 container died 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:57:48 np0005590528 systemd[1]: var-lib-containers-storage-overlay-77fc014a14235b371651e1301b86b7cd028b157dc86c186c6863416377df5dc7-merged.mount: Deactivated successfully.
Jan 21 08:57:48 np0005590528 podman[180563]: 2026-01-21 13:57:48.154397807 +0000 UTC m=+0.375592486 container remove 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:57:48 np0005590528 systemd[1]: libpod-conmon-950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8.scope: Deactivated successfully.
Jan 21 08:57:48 np0005590528 podman[180603]: 2026-01-21 13:57:48.336595723 +0000 UTC m=+0.054374822 container create 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:57:48 np0005590528 systemd[1]: Started libpod-conmon-02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c.scope.
Jan 21 08:57:48 np0005590528 podman[180603]: 2026-01-21 13:57:48.307483726 +0000 UTC m=+0.025262865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:57:48 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:57:48 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:48 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:48 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:48 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:48 np0005590528 podman[180603]: 2026-01-21 13:57:48.427681786 +0000 UTC m=+0.145460895 container init 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:57:48 np0005590528 podman[180603]: 2026-01-21 13:57:48.445119819 +0000 UTC m=+0.162898928 container start 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:57:48 np0005590528 podman[180603]: 2026-01-21 13:57:48.456505537 +0000 UTC m=+0.174284656 container attach 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]: {
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:    "0": [
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:        {
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "devices": [
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "/dev/loop3"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            ],
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_name": "ceph_lv0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_size": "21470642176",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "name": "ceph_lv0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "tags": {
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cluster_name": "ceph",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.crush_device_class": "",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.encrypted": "0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.objectstore": "bluestore",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osd_id": "0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.type": "block",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.vdo": "0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.with_tpm": "0"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            },
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "type": "block",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "vg_name": "ceph_vg0"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:        }
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:    ],
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:    "1": [
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:        {
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "devices": [
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "/dev/loop4"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            ],
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_name": "ceph_lv1",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_size": "21470642176",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "name": "ceph_lv1",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "tags": {
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cluster_name": "ceph",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.crush_device_class": "",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.encrypted": "0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.objectstore": "bluestore",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osd_id": "1",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.type": "block",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.vdo": "0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.with_tpm": "0"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            },
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "type": "block",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "vg_name": "ceph_vg1"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:        }
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:    ],
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:    "2": [
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:        {
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "devices": [
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "/dev/loop5"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            ],
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_name": "ceph_lv2",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_size": "21470642176",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "name": "ceph_lv2",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "tags": {
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.cluster_name": "ceph",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.crush_device_class": "",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.encrypted": "0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.objectstore": "bluestore",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osd_id": "2",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.type": "block",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.vdo": "0",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:                "ceph.with_tpm": "0"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            },
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "type": "block",
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:            "vg_name": "ceph_vg2"
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:        }
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]:    ]
Jan 21 08:57:48 np0005590528 awesome_dirac[180620]: }
Jan 21 08:57:48 np0005590528 systemd[1]: libpod-02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c.scope: Deactivated successfully.
Jan 21 08:57:48 np0005590528 podman[180603]: 2026-01-21 13:57:48.727273364 +0000 UTC m=+0.445052463 container died 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:57:48 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4-merged.mount: Deactivated successfully.
Jan 21 08:57:48 np0005590528 podman[180603]: 2026-01-21 13:57:48.782071336 +0000 UTC m=+0.499850435 container remove 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:57:48 np0005590528 systemd[1]: libpod-conmon-02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c.scope: Deactivated successfully.
Jan 21 08:57:49 np0005590528 podman[180705]: 2026-01-21 13:57:49.191185995 +0000 UTC m=+0.037334648 container create f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:57:49 np0005590528 systemd[1]: Started libpod-conmon-f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869.scope.
Jan 21 08:57:49 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:57:49 np0005590528 podman[180705]: 2026-01-21 13:57:49.173545517 +0000 UTC m=+0.019694150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:57:49 np0005590528 podman[180705]: 2026-01-21 13:57:49.283058347 +0000 UTC m=+0.129207040 container init f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 08:57:49 np0005590528 podman[180705]: 2026-01-21 13:57:49.292291201 +0000 UTC m=+0.138439824 container start f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 08:57:49 np0005590528 podman[180705]: 2026-01-21 13:57:49.296588176 +0000 UTC m=+0.142736809 container attach f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 08:57:49 np0005590528 intelligent_blackburn[180722]: 167 167
Jan 21 08:57:49 np0005590528 systemd[1]: libpod-f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869.scope: Deactivated successfully.
Jan 21 08:57:49 np0005590528 podman[180705]: 2026-01-21 13:57:49.298798979 +0000 UTC m=+0.144947642 container died f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 08:57:49 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d29fb1b50d2e96b3b90d4a0c4e8be3565938abc4311ff804ddf00233afaf2609-merged.mount: Deactivated successfully.
Jan 21 08:57:49 np0005590528 podman[180705]: 2026-01-21 13:57:49.397751183 +0000 UTC m=+0.243899806 container remove f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:57:49 np0005590528 systemd[1]: libpod-conmon-f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869.scope: Deactivated successfully.
Jan 21 08:57:49 np0005590528 podman[180748]: 2026-01-21 13:57:49.552233836 +0000 UTC m=+0.042812010 container create 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:57:49 np0005590528 systemd[1]: Started libpod-conmon-59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd.scope.
Jan 21 08:57:49 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:57:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:57:49 np0005590528 podman[180748]: 2026-01-21 13:57:49.533791928 +0000 UTC m=+0.024370102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:57:49 np0005590528 podman[180748]: 2026-01-21 13:57:49.688776833 +0000 UTC m=+0.179355027 container init 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Jan 21 08:57:49 np0005590528 podman[180748]: 2026-01-21 13:57:49.696530321 +0000 UTC m=+0.187108475 container start 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:57:49 np0005590528 podman[180748]: 2026-01-21 13:57:49.700060208 +0000 UTC m=+0.190638382 container attach 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 08:57:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:50 np0005590528 lvm[180847]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:57:50 np0005590528 lvm[180847]: VG ceph_vg0 finished
Jan 21 08:57:50 np0005590528 lvm[180848]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:57:50 np0005590528 lvm[180848]: VG ceph_vg1 finished
Jan 21 08:57:50 np0005590528 lvm[180850]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:57:50 np0005590528 lvm[180850]: VG ceph_vg2 finished
Jan 21 08:57:50 np0005590528 lvm[180851]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:57:50 np0005590528 lvm[180851]: VG ceph_vg0 finished
Jan 21 08:57:50 np0005590528 lvm[180853]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:57:50 np0005590528 lvm[180853]: VG ceph_vg2 finished
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:57:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:57:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:50 np0005590528 lvm[180854]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:57:50 np0005590528 lvm[180854]: VG ceph_vg2 finished
Jan 21 08:57:50 np0005590528 flamboyant_heyrovsky[180769]: {}
Jan 21 08:57:50 np0005590528 systemd[1]: libpod-59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd.scope: Deactivated successfully.
Jan 21 08:57:50 np0005590528 systemd[1]: libpod-59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd.scope: Consumed 1.276s CPU time.
Jan 21 08:57:50 np0005590528 podman[180748]: 2026-01-21 13:57:50.568434764 +0000 UTC m=+1.059012928 container died 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:57:50 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929-merged.mount: Deactivated successfully.
Jan 21 08:57:50 np0005590528 podman[180748]: 2026-01-21 13:57:50.690307595 +0000 UTC m=+1.180885749 container remove 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 08:57:50 np0005590528 systemd[1]: libpod-conmon-59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd.scope: Deactivated successfully.
Jan 21 08:57:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:57:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:57:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:57:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:57:51 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:57:51 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:57:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:57:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:57:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:00 np0005590528 podman[180906]: 2026-01-21 13:58:00.404464035 +0000 UTC m=+0.116692936 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 08:58:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:01 np0005590528 kernel: SELinux:  Converting 2775 SID table entries...
Jan 21 08:58:01 np0005590528 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 08:58:01 np0005590528 kernel: SELinux:  policy capability open_perms=1
Jan 21 08:58:01 np0005590528 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 08:58:01 np0005590528 kernel: SELinux:  policy capability always_check_network=0
Jan 21 08:58:01 np0005590528 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 08:58:01 np0005590528 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 08:58:01 np0005590528 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 08:58:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:03 np0005590528 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 08:58:03 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 21 08:58:03 np0005590528 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 08:58:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:06 np0005590528 podman[180988]: 2026-01-21 13:58:06.389410784 +0000 UTC m=+0.089118886 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 21 08:58:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:58:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:58:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:58:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:58:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:58:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:58:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.378203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003894378233, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3568840, "memory_usage": 3621936, "flush_reason": "Manual Compaction"}
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003894568582, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3482571, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9711, "largest_seqno": 11748, "table_properties": {"data_size": 3473314, "index_size": 5879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17768, "raw_average_key_size": 19, "raw_value_size": 3454962, "raw_average_value_size": 3780, "num_data_blocks": 267, "num_entries": 914, "num_filter_entries": 914, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003660, "oldest_key_time": 1769003660, "file_creation_time": 1769003894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 190423 microseconds, and 5937 cpu microseconds.
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.568625) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3482571 bytes OK
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.568643) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.802823) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.802867) EVENT_LOG_v1 {"time_micros": 1769003894802858, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.802892) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3560347, prev total WAL file size 3560347, number of live WAL files 2.
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.804028) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3400KB)], [26(6025KB)]
Jan 21 08:58:14 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003894804074, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9652888, "oldest_snapshot_seqno": -1}
Jan 21 08:58:15 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3705 keys, 8083234 bytes, temperature: kUnknown
Jan 21 08:58:15 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003895882613, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8083234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8054890, "index_size": 17994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88907, "raw_average_key_size": 23, "raw_value_size": 7984453, "raw_average_value_size": 2155, "num_data_blocks": 779, "num_entries": 3705, "num_filter_entries": 3705, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769003894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 21 08:58:15 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 08:58:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:15.882960) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8083234 bytes
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.955230) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 8.9 rd, 7.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4219, records dropped: 514 output_compression: NoCompression
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.955295) EVENT_LOG_v1 {"time_micros": 1769003896955271, "job": 10, "event": "compaction_finished", "compaction_time_micros": 1078675, "compaction_time_cpu_micros": 19323, "output_level": 6, "num_output_files": 1, "total_output_size": 8083234, "num_input_records": 4219, "num_output_records": 3705, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003896956333, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003896957421, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.803949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:58:16 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 08:58:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:19 np0005590528 systemd[1]: Stopping OpenSSH server daemon...
Jan 21 08:58:19 np0005590528 systemd[1]: sshd.service: Deactivated successfully.
Jan 21 08:58:19 np0005590528 systemd[1]: Stopped OpenSSH server daemon.
Jan 21 08:58:19 np0005590528 systemd[1]: sshd.service: Consumed 2.578s CPU time, read 32.0K from disk, written 0B to disk.
Jan 21 08:58:19 np0005590528 systemd[1]: Stopped target sshd-keygen.target.
Jan 21 08:58:19 np0005590528 systemd[1]: Stopping sshd-keygen.target...
Jan 21 08:58:19 np0005590528 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 08:58:19 np0005590528 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 08:58:19 np0005590528 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 08:58:19 np0005590528 systemd[1]: Reached target sshd-keygen.target.
Jan 21 08:58:19 np0005590528 systemd[1]: Starting OpenSSH server daemon...
Jan 21 08:58:19 np0005590528 systemd[1]: Started OpenSSH server daemon.
Jan 21 08:58:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:21 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 08:58:21 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 08:58:21 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:21 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:21 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:22 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 08:58:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:28 np0005590528 python3.9[188632]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:58:28 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:28 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:28 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:29 np0005590528 python3.9[189945]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:58:29 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:29 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:29 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:30 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 08:58:30 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 08:58:30 np0005590528 systemd[1]: man-db-cache-update.service: Consumed 10.801s CPU time.
Jan 21 08:58:30 np0005590528 systemd[1]: run-r6e0f824e1e6d45e7b1896cba5e1c78fd.service: Deactivated successfully.
Jan 21 08:58:30 np0005590528 podman[190950]: 2026-01-21 13:58:30.587269874 +0000 UTC m=+0.109036230 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 21 08:58:30 np0005590528 python3.9[191004]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:58:30 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:30 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:30 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:31 np0005590528 python3.9[191198]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:58:31 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:32 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:32 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:33 np0005590528 python3.9[191389]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:33 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:33 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:33 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:58:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 08:58:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:58:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 08:58:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:58:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 08:58:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:34 np0005590528 python3.9[191579]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:34 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:34 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:34 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:35 np0005590528 python3.9[191770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:35 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:35 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:35 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:36 np0005590528 python3.9[191960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:36 np0005590528 podman[191962]: 2026-01-21 13:58:36.551180223 +0000 UTC m=+0.105530002 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Jan 21 08:58:37 np0005590528 python3.9[192135]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:37 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:37 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:37 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:38 np0005590528 python3.9[192325]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 08:58:38 np0005590528 systemd[1]: Reloading.
Jan 21 08:58:39 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:58:39 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:58:39 np0005590528 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 21 08:58:39 np0005590528 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 21 08:58:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:58:39
Jan 21 08:58:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:58:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:58:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'vms', 'default.rgw.control', '.rgw.root']
Jan 21 08:58:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:58:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:40 np0005590528 python3.9[192517]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:40 np0005590528 python3.9[192672]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:58:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:58:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:58:40 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:58:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:42 np0005590528 python3.9[192827]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:43 np0005590528 python3.9[192982]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:44 np0005590528 python3.9[193137]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:45 np0005590528 python3.9[193292]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:46 np0005590528 python3.9[193447]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:46 np0005590528 python3.9[193602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:47 np0005590528 python3.9[193757]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:48 np0005590528 python3.9[193912]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:49 np0005590528 python3.9[194067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:50 np0005590528 python3.9[194222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:58:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:58:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:51 np0005590528 python3.9[194377]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:51 np0005590528 python3.9[194602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:58:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:58:52 np0005590528 podman[194829]: 2026-01-21 13:58:52.723645147 +0000 UTC m=+0.031600469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:58:52 np0005590528 python3.9[194838]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:58:52 np0005590528 podman[194829]: 2026-01-21 13:58:52.970788656 +0000 UTC m=+0.278743948 container create caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:58:53 np0005590528 systemd[1]: Started libpod-conmon-caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292.scope.
Jan 21 08:58:53 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:58:53 np0005590528 python3.9[195003]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:58:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:58:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:58:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:58:53 np0005590528 podman[194829]: 2026-01-21 13:58:53.75133611 +0000 UTC m=+1.059291502 container init caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 08:58:53 np0005590528 podman[194829]: 2026-01-21 13:58:53.766765384 +0000 UTC m=+1.074720676 container start caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:58:53 np0005590528 relaxed_morse[194960]: 167 167
Jan 21 08:58:53 np0005590528 systemd[1]: libpod-caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292.scope: Deactivated successfully.
Jan 21 08:58:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:53 np0005590528 podman[194829]: 2026-01-21 13:58:53.945165504 +0000 UTC m=+1.253120806 container attach caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 08:58:53 np0005590528 podman[194829]: 2026-01-21 13:58:53.945635676 +0000 UTC m=+1.253590978 container died caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:58:54 np0005590528 python3.9[195169]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:58:54 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9094fba0c0fe3fac2438c586e74dc969eb09d1251db311695b788167f4ab754f-merged.mount: Deactivated successfully.
Jan 21 08:58:54 np0005590528 podman[194829]: 2026-01-21 13:58:54.475471436 +0000 UTC m=+1.783426728 container remove caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 08:58:54 np0005590528 systemd[1]: libpod-conmon-caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292.scope: Deactivated successfully.
Jan 21 08:58:55 np0005590528 podman[195276]: 2026-01-21 13:58:55.122129641 +0000 UTC m=+0.515029122 container create d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 08:58:55 np0005590528 systemd[1]: Started libpod-conmon-d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1.scope.
Jan 21 08:58:55 np0005590528 podman[195276]: 2026-01-21 13:58:55.098344994 +0000 UTC m=+0.491244595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:58:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:58:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:55 np0005590528 podman[195276]: 2026-01-21 13:58:55.216160573 +0000 UTC m=+0.609060084 container init d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 08:58:55 np0005590528 podman[195276]: 2026-01-21 13:58:55.225145672 +0000 UTC m=+0.618045163 container start d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 08:58:55 np0005590528 podman[195276]: 2026-01-21 13:58:55.241126219 +0000 UTC m=+0.634025760 container attach d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 08:58:55 np0005590528 python3.9[195340]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:58:55 np0005590528 serene_sutherland[195346]: --> passed data devices: 0 physical, 3 LVM
Jan 21 08:58:55 np0005590528 serene_sutherland[195346]: --> All data devices are unavailable
Jan 21 08:58:55 np0005590528 systemd[1]: libpod-d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1.scope: Deactivated successfully.
Jan 21 08:58:55 np0005590528 podman[195276]: 2026-01-21 13:58:55.717057921 +0000 UTC m=+1.109957422 container died d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:58:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f-merged.mount: Deactivated successfully.
Jan 21 08:58:55 np0005590528 podman[195276]: 2026-01-21 13:58:55.772152637 +0000 UTC m=+1.165052118 container remove d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:58:55 np0005590528 systemd[1]: libpod-conmon-d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1.scope: Deactivated successfully.
Jan 21 08:58:55 np0005590528 python3.9[195517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:58:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:58:56 np0005590528 podman[195670]: 2026-01-21 13:58:56.186462773 +0000 UTC m=+0.043114867 container create b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 08:58:56 np0005590528 systemd[1]: Started libpod-conmon-b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224.scope.
Jan 21 08:58:56 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:58:56 np0005590528 podman[195670]: 2026-01-21 13:58:56.163276421 +0000 UTC m=+0.019928535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:58:56 np0005590528 podman[195670]: 2026-01-21 13:58:56.312074262 +0000 UTC m=+0.168726376 container init b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:58:56 np0005590528 podman[195670]: 2026-01-21 13:58:56.320147008 +0000 UTC m=+0.176799102 container start b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:58:56 np0005590528 podman[195670]: 2026-01-21 13:58:56.3235199 +0000 UTC m=+0.180172034 container attach b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 08:58:56 np0005590528 systemd[1]: libpod-b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224.scope: Deactivated successfully.
Jan 21 08:58:56 np0005590528 gracious_proskuriakova[195714]: 167 167
Jan 21 08:58:56 np0005590528 conmon[195714]: conmon b9e406e34e94434a98f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224.scope/container/memory.events
Jan 21 08:58:56 np0005590528 podman[195670]: 2026-01-21 13:58:56.326371709 +0000 UTC m=+0.183023813 container died b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:58:56 np0005590528 systemd[1]: var-lib-containers-storage-overlay-5387a821d54811110a18fb71888313887b2f825a3b99e4a786577cd54b91669e-merged.mount: Deactivated successfully.
Jan 21 08:58:56 np0005590528 podman[195670]: 2026-01-21 13:58:56.449778204 +0000 UTC m=+0.306430338 container remove b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 08:58:56 np0005590528 systemd[1]: libpod-conmon-b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224.scope: Deactivated successfully.
Jan 21 08:58:56 np0005590528 python3.9[195764]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 08:58:56 np0005590528 podman[195788]: 2026-01-21 13:58:56.6333529 +0000 UTC m=+0.040157805 container create d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:58:56 np0005590528 systemd[1]: Started libpod-conmon-d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27.scope.
Jan 21 08:58:56 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:58:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:56 np0005590528 podman[195788]: 2026-01-21 13:58:56.616015519 +0000 UTC m=+0.022820444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:58:56 np0005590528 podman[195788]: 2026-01-21 13:58:56.756045768 +0000 UTC m=+0.162850703 container init d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 08:58:56 np0005590528 podman[195788]: 2026-01-21 13:58:56.766342927 +0000 UTC m=+0.173147842 container start d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 08:58:56 np0005590528 podman[195788]: 2026-01-21 13:58:56.773809609 +0000 UTC m=+0.180614514 container attach d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:58:57 np0005590528 sad_tesla[195825]: {
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:    "0": [
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:        {
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "devices": [
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "/dev/loop3"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            ],
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_name": "ceph_lv0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_size": "21470642176",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "name": "ceph_lv0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "tags": {
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cluster_name": "ceph",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.crush_device_class": "",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.encrypted": "0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.objectstore": "bluestore",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osd_id": "0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.type": "block",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.vdo": "0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.with_tpm": "0"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            },
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "type": "block",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "vg_name": "ceph_vg0"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:        }
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:    ],
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:    "1": [
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:        {
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "devices": [
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "/dev/loop4"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            ],
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_name": "ceph_lv1",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_size": "21470642176",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "name": "ceph_lv1",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "tags": {
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cluster_name": "ceph",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.crush_device_class": "",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.encrypted": "0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.objectstore": "bluestore",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osd_id": "1",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.type": "block",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.vdo": "0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.with_tpm": "0"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            },
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "type": "block",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "vg_name": "ceph_vg1"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:        }
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:    ],
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:    "2": [
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:        {
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "devices": [
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "/dev/loop5"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            ],
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_name": "ceph_lv2",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_size": "21470642176",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "name": "ceph_lv2",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "tags": {
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cephx_lockbox_secret": "",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.cluster_name": "ceph",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.crush_device_class": "",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.encrypted": "0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.objectstore": "bluestore",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osd_id": "2",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.type": "block",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.vdo": "0",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:                "ceph.with_tpm": "0"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            },
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "type": "block",
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:            "vg_name": "ceph_vg2"
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:        }
Jan 21 08:58:57 np0005590528 sad_tesla[195825]:    ]
Jan 21 08:58:57 np0005590528 sad_tesla[195825]: }
Jan 21 08:58:57 np0005590528 systemd[1]: libpod-d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27.scope: Deactivated successfully.
Jan 21 08:58:57 np0005590528 conmon[195825]: conmon d66f3a509c0533255c6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27.scope/container/memory.events
Jan 21 08:58:57 np0005590528 podman[195788]: 2026-01-21 13:58:57.095736113 +0000 UTC m=+0.502541018 container died d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 08:58:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81-merged.mount: Deactivated successfully.
Jan 21 08:58:57 np0005590528 podman[195788]: 2026-01-21 13:58:57.143764768 +0000 UTC m=+0.550569673 container remove d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 08:58:57 np0005590528 systemd[1]: libpod-conmon-d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27.scope: Deactivated successfully.
Jan 21 08:58:57 np0005590528 python3.9[195959]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 08:58:57 np0005590528 podman[196082]: 2026-01-21 13:58:57.584891245 +0000 UTC m=+0.041768255 container create bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 08:58:57 np0005590528 systemd[1]: Started libpod-conmon-bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1.scope.
Jan 21 08:58:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:58:57 np0005590528 podman[196082]: 2026-01-21 13:58:57.566794646 +0000 UTC m=+0.023671706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:58:57 np0005590528 podman[196082]: 2026-01-21 13:58:57.673839904 +0000 UTC m=+0.130716944 container init bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 08:58:57 np0005590528 podman[196082]: 2026-01-21 13:58:57.681303095 +0000 UTC m=+0.138180095 container start bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 08:58:57 np0005590528 podman[196082]: 2026-01-21 13:58:57.685593539 +0000 UTC m=+0.142470579 container attach bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 08:58:57 np0005590528 silly_montalcini[196126]: 167 167
Jan 21 08:58:57 np0005590528 systemd[1]: libpod-bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1.scope: Deactivated successfully.
Jan 21 08:58:57 np0005590528 podman[196082]: 2026-01-21 13:58:57.690952819 +0000 UTC m=+0.147829859 container died bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 08:58:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-a8de7d01bb20ae9f13ec5bcf3bcb1d63d1fc17df48f4852e9d9268b47c92e4c4-merged.mount: Deactivated successfully.
Jan 21 08:58:57 np0005590528 podman[196082]: 2026-01-21 13:58:57.730620932 +0000 UTC m=+0.187497942 container remove bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 08:58:57 np0005590528 systemd[1]: libpod-conmon-bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1.scope: Deactivated successfully.
Jan 21 08:58:57 np0005590528 podman[196194]: 2026-01-21 13:58:57.920752117 +0000 UTC m=+0.041990530 container create 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:58:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:58:57 np0005590528 systemd[1]: Started libpod-conmon-4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44.scope.
Jan 21 08:58:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 08:58:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 08:58:57 np0005590528 podman[196194]: 2026-01-21 13:58:57.904330668 +0000 UTC m=+0.025569061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 08:58:58 np0005590528 podman[196194]: 2026-01-21 13:58:58.007963843 +0000 UTC m=+0.129202256 container init 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 08:58:58 np0005590528 podman[196194]: 2026-01-21 13:58:58.018051548 +0000 UTC m=+0.139289941 container start 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 08:58:58 np0005590528 podman[196194]: 2026-01-21 13:58:58.022619549 +0000 UTC m=+0.143857972 container attach 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:58:58 np0005590528 python3.9[196244]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:58:58 np0005590528 lvm[196426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 08:58:58 np0005590528 lvm[196426]: VG ceph_vg0 finished
Jan 21 08:58:58 np0005590528 lvm[196434]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 08:58:58 np0005590528 lvm[196434]: VG ceph_vg1 finished
Jan 21 08:58:58 np0005590528 lvm[196447]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 08:58:58 np0005590528 lvm[196447]: VG ceph_vg2 finished
Jan 21 08:58:58 np0005590528 dreamy_cerf[196242]: {}
Jan 21 08:58:58 np0005590528 systemd[1]: libpod-4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44.scope: Deactivated successfully.
Jan 21 08:58:58 np0005590528 systemd[1]: libpod-4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44.scope: Consumed 1.265s CPU time.
Jan 21 08:58:58 np0005590528 podman[196194]: 2026-01-21 13:58:58.864113593 +0000 UTC m=+0.985352006 container died 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 08:58:58 np0005590528 python3.9[196449]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003937.5232837-557-10781633875981/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:58:59 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e-merged.mount: Deactivated successfully.
Jan 21 08:58:59 np0005590528 podman[196194]: 2026-01-21 13:58:59.069073038 +0000 UTC m=+1.190311431 container remove 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 08:58:59 np0005590528 systemd[1]: libpod-conmon-4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44.scope: Deactivated successfully.
Jan 21 08:58:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 08:58:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:58:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 08:58:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:58:59 np0005590528 python3.9[196639]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:58:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:00 np0005590528 python3.9[196764]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003939.0820577-557-264960872901528/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:00 np0005590528 podman[196916]: 2026-01-21 13:59:00.803914984 +0000 UTC m=+0.122326159 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 08:59:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:02 np0005590528 python3.9[196917]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:02 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:59:02 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:59:03 np0005590528 python3.9[197067]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003940.374972-557-165156706703402/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:03 np0005590528 python3.9[197219]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:04 np0005590528 python3.9[197344]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003943.3088322-557-112566675152275/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:04 np0005590528 python3.9[197496]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:05 np0005590528 python3.9[197621]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003944.438018-557-122441901921560/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:06 np0005590528 python3.9[197773]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:06 np0005590528 python3.9[197898]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003945.6471157-557-56167089212790/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:06 np0005590528 podman[197899]: 2026-01-21 13:59:06.816449095 +0000 UTC m=+0.047722199 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 21 08:59:07 np0005590528 python3.9[198069]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:08 np0005590528 python3.9[198192]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003946.9041576-557-22620768676200/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:08 np0005590528 python3.9[198344]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:09 np0005590528 python3.9[198469]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003948.170144-557-142586222554499/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:09 np0005590528 python3.9[198621]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 21 08:59:10 np0005590528 python3.9[198774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:59:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:59:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:59:10 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:59:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:59:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:59:11 np0005590528 python3.9[198926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:11 np0005590528 python3.9[199078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:12 np0005590528 python3.9[199230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:13 np0005590528 python3.9[199382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:13 np0005590528 python3.9[199534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:14 np0005590528 python3.9[199686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:15 np0005590528 python3.9[199838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:15 np0005590528 python3.9[199990]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:16 np0005590528 python3.9[200142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:16 np0005590528 python3.9[200294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:17 np0005590528 python3.9[200446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:18 np0005590528 python3.9[200598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:19 np0005590528 python3.9[200750]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:19 np0005590528 python3.9[200902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:20 np0005590528 python3.9[201025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003959.2498343-778-275291954432556/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:20 np0005590528 python3.9[201177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:21 np0005590528 python3.9[201300]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003960.453589-778-73771734669294/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:22 np0005590528 python3.9[201452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:22 np0005590528 python3.9[201575]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003961.6753933-778-83109666681728/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:23 np0005590528 python3.9[201727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:23 np0005590528 python3.9[201850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003962.8937893-778-64823596107619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:24 np0005590528 python3.9[202002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:24 np0005590528 python3.9[202125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003963.9531476-778-56925722585062/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:25 np0005590528 python3.9[202277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:26 np0005590528 python3.9[202400]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003965.0572498-778-5756098653459/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:26 np0005590528 python3.9[202552]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:27 np0005590528 python3.9[202675]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003966.368809-778-266475788483077/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:28 np0005590528 python3.9[202827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:28 np0005590528 python3.9[202950]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003967.5740592-778-231667736682513/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:29 np0005590528 python3.9[203102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:29 np0005590528 python3.9[203225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003968.7920122-778-26135731959570/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:30 np0005590528 python3.9[203377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:31 np0005590528 podman[203472]: 2026-01-21 13:59:31.07837091 +0000 UTC m=+0.102776256 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:59:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:31 np0005590528 python3.9[203521]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003970.0081892-778-101188716641056/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:31 np0005590528 python3.9[203678]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:32 np0005590528 python3.9[203801]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003971.4297714-778-276506536456901/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:33 np0005590528 python3.9[203953]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:33 np0005590528 python3.9[204076]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003972.7090404-778-121441671472826/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:59:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 08:59:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:59:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 08:59:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 13:59:33.889 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 08:59:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:34 np0005590528 python3.9[204228]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:35 np0005590528 python3.9[204351]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003974.0347362-778-10794286874459/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:35 np0005590528 python3.9[204503]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:36 np0005590528 python3.9[204626]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003975.4912531-778-199784469938911/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:37 np0005590528 podman[204750]: 2026-01-21 13:59:37.145484295 +0000 UTC m=+0.068636117 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 08:59:37 np0005590528 python3.9[204788]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:59:37 np0005590528 auditd[698]: Audit daemon rotating log files
Jan 21 08:59:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:38 np0005590528 python3.9[204949]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 21 08:59:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:59:39
Jan 21 08:59:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 08:59:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 08:59:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'backups', 'images']
Jan 21 08:59:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 08:59:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:40 np0005590528 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 21 08:59:40 np0005590528 python3.9[205105]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 08:59:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:41 np0005590528 python3.9[205257]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:42 np0005590528 python3.9[205409]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:42 np0005590528 python3.9[205561]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:43 np0005590528 python3.9[205713]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:44 np0005590528 python3.9[205865]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:44 np0005590528 python3.9[206017]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:45 np0005590528 python3.9[206169]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:46 np0005590528 python3.9[206321]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:47 np0005590528 python3.9[206473]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:47 np0005590528 python3.9[206625]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:59:47 np0005590528 systemd[1]: Reloading.
Jan 21 08:59:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:48 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:59:48 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:59:48 np0005590528 systemd[1]: Starting libvirt logging daemon socket...
Jan 21 08:59:48 np0005590528 systemd[1]: Listening on libvirt logging daemon socket.
Jan 21 08:59:48 np0005590528 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 21 08:59:48 np0005590528 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 21 08:59:48 np0005590528 systemd[1]: Starting libvirt logging daemon...
Jan 21 08:59:48 np0005590528 systemd[1]: Started libvirt logging daemon.
Jan 21 08:59:49 np0005590528 python3.9[206818]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:59:49 np0005590528 systemd[1]: Reloading.
Jan 21 08:59:49 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:59:49 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:59:49 np0005590528 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 21 08:59:49 np0005590528 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 21 08:59:49 np0005590528 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 21 08:59:49 np0005590528 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 21 08:59:49 np0005590528 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 21 08:59:49 np0005590528 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 21 08:59:49 np0005590528 systemd[1]: Starting libvirt nodedev daemon...
Jan 21 08:59:49 np0005590528 systemd[1]: Started libvirt nodedev daemon.
Jan 21 08:59:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:50 np0005590528 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 21 08:59:50 np0005590528 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 08:59:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 08:59:50 np0005590528 python3.9[207035]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:59:50 np0005590528 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 21 08:59:50 np0005590528 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 21 08:59:50 np0005590528 systemd[1]: Reloading.
Jan 21 08:59:50 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:59:50 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:59:50 np0005590528 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 21 08:59:50 np0005590528 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 21 08:59:50 np0005590528 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 21 08:59:50 np0005590528 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 21 08:59:50 np0005590528 systemd[1]: Starting libvirt proxy daemon...
Jan 21 08:59:51 np0005590528 systemd[1]: Started libvirt proxy daemon.
Jan 21 08:59:51 np0005590528 setroubleshoot[206962]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a05a998e-ab93-4a8a-b3db-5ee3c9b943d9
Jan 21 08:59:51 np0005590528 setroubleshoot[206962]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Jan 21 08:59:51 np0005590528 setroubleshoot[206962]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a05a998e-ab93-4a8a-b3db-5ee3c9b943d9
Jan 21 08:59:51 np0005590528 setroubleshoot[206962]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Jan 21 08:59:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:51 np0005590528 python3.9[207256]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:59:51 np0005590528 systemd[1]: Reloading.
Jan 21 08:59:51 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:59:51 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:59:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:52 np0005590528 systemd[1]: Listening on libvirt locking daemon socket.
Jan 21 08:59:52 np0005590528 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 21 08:59:52 np0005590528 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 21 08:59:52 np0005590528 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 21 08:59:52 np0005590528 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 21 08:59:52 np0005590528 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 21 08:59:52 np0005590528 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 21 08:59:52 np0005590528 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 21 08:59:52 np0005590528 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 21 08:59:52 np0005590528 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 21 08:59:52 np0005590528 systemd[1]: Starting libvirt QEMU daemon...
Jan 21 08:59:52 np0005590528 systemd[1]: Started libvirt QEMU daemon.
Jan 21 08:59:53 np0005590528 python3.9[207471]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 08:59:53 np0005590528 systemd[1]: Reloading.
Jan 21 08:59:53 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 08:59:53 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 08:59:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:54 np0005590528 systemd[1]: Starting libvirt secret daemon socket...
Jan 21 08:59:54 np0005590528 systemd[1]: Listening on libvirt secret daemon socket.
Jan 21 08:59:54 np0005590528 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 21 08:59:54 np0005590528 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 21 08:59:54 np0005590528 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 21 08:59:54 np0005590528 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 21 08:59:54 np0005590528 systemd[1]: Starting libvirt secret daemon...
Jan 21 08:59:54 np0005590528 systemd[1]: Started libvirt secret daemon.
Jan 21 08:59:55 np0005590528 python3.9[207683]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:55 np0005590528 python3.9[207835]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 08:59:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:56 np0005590528 python3.9[207987]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:59:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 08:59:57 np0005590528 python3.9[208141]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 08:59:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:58 np0005590528 python3.9[208291]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 08:59:58 np0005590528 python3.9[208412]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003997.6639302-1136-173352602892668/.source.xml follow=False _original_basename=secret.xml.j2 checksum=d27a26758af4fbf69deaa7c87560773282374616 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 08:59:59 np0005590528 python3.9[208564]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 2f0e9cad-f0a3-5869-9cc3-8d84d071866a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 08:59:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 08:59:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:00:00 np0005590528 python3.9[208801]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:00 np0005590528 podman[208943]: 2026-01-21 14:00:00.336781078 +0000 UTC m=+0.038995717 container create 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 09:00:00 np0005590528 systemd[1]: Started libpod-conmon-8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96.scope.
Jan 21 09:00:00 np0005590528 podman[208943]: 2026-01-21 14:00:00.318602367 +0000 UTC m=+0.020816916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:00:00 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:00:00 np0005590528 podman[208943]: 2026-01-21 14:00:00.613968612 +0000 UTC m=+0.316183161 container init 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 09:00:00 np0005590528 podman[208943]: 2026-01-21 14:00:00.627248614 +0000 UTC m=+0.329463143 container start 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:00:00 np0005590528 podman[208943]: 2026-01-21 14:00:00.633120637 +0000 UTC m=+0.335335196 container attach 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 09:00:00 np0005590528 unruffled_davinci[208982]: 167 167
Jan 21 09:00:00 np0005590528 systemd[1]: libpod-8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96.scope: Deactivated successfully.
Jan 21 09:00:00 np0005590528 podman[208943]: 2026-01-21 14:00:00.636421956 +0000 UTC m=+0.338636505 container died 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 09:00:00 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0f5d18cb8f9a568bfd15fd8debeb9c85ac261f492232d2c36ba7c3b439ddf9cc-merged.mount: Deactivated successfully.
Jan 21 09:00:00 np0005590528 podman[208943]: 2026-01-21 14:00:00.70125348 +0000 UTC m=+0.403468039 container remove 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:00:00 np0005590528 systemd[1]: libpod-conmon-8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96.scope: Deactivated successfully.
Jan 21 09:00:00 np0005590528 podman[209083]: 2026-01-21 14:00:00.95233544 +0000 UTC m=+0.065423818 container create c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:00:01 np0005590528 systemd[1]: Started libpod-conmon-c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333.scope.
Jan 21 09:00:01 np0005590528 podman[209083]: 2026-01-21 14:00:00.926176265 +0000 UTC m=+0.039264653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:00:01 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:00:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:01 np0005590528 podman[209083]: 2026-01-21 14:00:01.064334367 +0000 UTC m=+0.177422735 container init c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:00:01 np0005590528 podman[209083]: 2026-01-21 14:00:01.072340071 +0000 UTC m=+0.185428409 container start c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:00:01 np0005590528 podman[209083]: 2026-01-21 14:00:01.07725707 +0000 UTC m=+0.190345438 container attach c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 09:00:01 np0005590528 podman[209205]: 2026-01-21 14:00:01.365084542 +0000 UTC m=+0.084797628 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 21 09:00:01 np0005590528 frosty_moser[209131]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:00:01 np0005590528 frosty_moser[209131]: --> All data devices are unavailable
Jan 21 09:00:01 np0005590528 systemd[1]: libpod-c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333.scope: Deactivated successfully.
Jan 21 09:00:01 np0005590528 podman[209083]: 2026-01-21 14:00:01.549033384 +0000 UTC m=+0.662121742 container died c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 09:00:01 np0005590528 systemd[1]: var-lib-containers-storage-overlay-175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b-merged.mount: Deactivated successfully.
Jan 21 09:00:01 np0005590528 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 21 09:00:01 np0005590528 podman[209083]: 2026-01-21 14:00:01.627133729 +0000 UTC m=+0.740222067 container remove c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:00:01 np0005590528 systemd[1]: libpod-conmon-c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333.scope: Deactivated successfully.
Jan 21 09:00:01 np0005590528 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 21 09:00:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:02 np0005590528 podman[209441]: 2026-01-21 14:00:02.121835259 +0000 UTC m=+0.039073099 container create 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:00:02 np0005590528 systemd[1]: Started libpod-conmon-31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958.scope.
Jan 21 09:00:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:00:02 np0005590528 podman[209441]: 2026-01-21 14:00:02.19113869 +0000 UTC m=+0.108376550 container init 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:00:02 np0005590528 podman[209441]: 2026-01-21 14:00:02.199074772 +0000 UTC m=+0.116312602 container start 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 09:00:02 np0005590528 podman[209441]: 2026-01-21 14:00:02.103877263 +0000 UTC m=+0.021115123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:00:02 np0005590528 podman[209441]: 2026-01-21 14:00:02.203633114 +0000 UTC m=+0.120870944 container attach 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:00:02 np0005590528 tender_wu[209486]: 167 167
Jan 21 09:00:02 np0005590528 systemd[1]: libpod-31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958.scope: Deactivated successfully.
Jan 21 09:00:02 np0005590528 podman[209441]: 2026-01-21 14:00:02.20639116 +0000 UTC m=+0.123629010 container died 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 09:00:02 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0158c1f9d7e8195af63308bf233b0e5c4fba6dbe7d0f343569df97a674ee92e6-merged.mount: Deactivated successfully.
Jan 21 09:00:02 np0005590528 podman[209441]: 2026-01-21 14:00:02.247737673 +0000 UTC m=+0.164975513 container remove 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 09:00:02 np0005590528 systemd[1]: libpod-conmon-31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958.scope: Deactivated successfully.
Jan 21 09:00:02 np0005590528 podman[209544]: 2026-01-21 14:00:02.399183246 +0000 UTC m=+0.046371675 container create cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 09:00:02 np0005590528 python3.9[209536]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:02 np0005590528 systemd[1]: Started libpod-conmon-cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3.scope.
Jan 21 09:00:02 np0005590528 podman[209544]: 2026-01-21 14:00:02.376980308 +0000 UTC m=+0.024168757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:00:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:00:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:02 np0005590528 podman[209544]: 2026-01-21 14:00:02.502650986 +0000 UTC m=+0.149839425 container init cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:00:02 np0005590528 podman[209544]: 2026-01-21 14:00:02.510150189 +0000 UTC m=+0.157338608 container start cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 09:00:02 np0005590528 podman[209544]: 2026-01-21 14:00:02.513742636 +0000 UTC m=+0.160931045 container attach cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 09:00:02 np0005590528 busy_fermi[209561]: {
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:    "0": [
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:        {
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "devices": [
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "/dev/loop3"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            ],
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_name": "ceph_lv0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_size": "21470642176",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "name": "ceph_lv0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "tags": {
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cluster_name": "ceph",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.crush_device_class": "",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.encrypted": "0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.objectstore": "bluestore",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osd_id": "0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.type": "block",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.vdo": "0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.with_tpm": "0"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            },
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "type": "block",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "vg_name": "ceph_vg0"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:        }
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:    ],
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:    "1": [
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:        {
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "devices": [
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "/dev/loop4"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            ],
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_name": "ceph_lv1",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_size": "21470642176",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "name": "ceph_lv1",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "tags": {
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cluster_name": "ceph",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.crush_device_class": "",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.encrypted": "0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.objectstore": "bluestore",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osd_id": "1",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.type": "block",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.vdo": "0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.with_tpm": "0"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            },
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "type": "block",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "vg_name": "ceph_vg1"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:        }
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:    ],
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:    "2": [
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:        {
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "devices": [
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "/dev/loop5"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            ],
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_name": "ceph_lv2",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_size": "21470642176",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "name": "ceph_lv2",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "tags": {
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.cluster_name": "ceph",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.crush_device_class": "",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.encrypted": "0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.objectstore": "bluestore",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osd_id": "2",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.type": "block",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.vdo": "0",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:                "ceph.with_tpm": "0"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            },
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "type": "block",
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:            "vg_name": "ceph_vg2"
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:        }
Jan 21 09:00:02 np0005590528 busy_fermi[209561]:    ]
Jan 21 09:00:02 np0005590528 busy_fermi[209561]: }
Jan 21 09:00:02 np0005590528 systemd[1]: libpod-cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3.scope: Deactivated successfully.
Jan 21 09:00:02 np0005590528 podman[209544]: 2026-01-21 14:00:02.83505332 +0000 UTC m=+0.482241729 container died cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 09:00:02 np0005590528 systemd[1]: var-lib-containers-storage-overlay-5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765-merged.mount: Deactivated successfully.
Jan 21 09:00:02 np0005590528 podman[209544]: 2026-01-21 14:00:02.876861494 +0000 UTC m=+0.524049893 container remove cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 09:00:02 np0005590528 systemd[1]: libpod-conmon-cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3.scope: Deactivated successfully.
Jan 21 09:00:03 np0005590528 python3.9[209750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:03 np0005590528 podman[209819]: 2026-01-21 14:00:03.263923183 +0000 UTC m=+0.020633562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:00:03 np0005590528 python3.9[209932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004002.643615-1191-82189255848235/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:04 np0005590528 python3.9[210084]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:04 np0005590528 podman[209819]: 2026-01-21 14:00:04.693805918 +0000 UTC m=+1.450516317 container create da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 09:00:04 np0005590528 systemd[1]: Started libpod-conmon-da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735.scope.
Jan 21 09:00:04 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:00:04 np0005590528 podman[209819]: 2026-01-21 14:00:04.806742398 +0000 UTC m=+1.563452757 container init da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:00:04 np0005590528 podman[209819]: 2026-01-21 14:00:04.813913782 +0000 UTC m=+1.570624141 container start da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:00:04 np0005590528 podman[209819]: 2026-01-21 14:00:04.817789166 +0000 UTC m=+1.574499555 container attach da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:00:04 np0005590528 wonderful_almeida[210125]: 167 167
Jan 21 09:00:04 np0005590528 systemd[1]: libpod-da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735.scope: Deactivated successfully.
Jan 21 09:00:04 np0005590528 podman[209819]: 2026-01-21 14:00:04.819452806 +0000 UTC m=+1.576163165 container died da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 09:00:04 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c4d3263bf935fea339234e4e0f5f88ec0c6ed56bae36d05edea5fd44c4951d27-merged.mount: Deactivated successfully.
Jan 21 09:00:04 np0005590528 podman[209819]: 2026-01-21 14:00:04.901805974 +0000 UTC m=+1.658516333 container remove da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:00:04 np0005590528 systemd[1]: libpod-conmon-da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735.scope: Deactivated successfully.
Jan 21 09:00:05 np0005590528 podman[210259]: 2026-01-21 14:00:05.066785446 +0000 UTC m=+0.025779777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:00:05 np0005590528 podman[210259]: 2026-01-21 14:00:05.252280385 +0000 UTC m=+0.211274706 container create 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:00:05 np0005590528 systemd[1]: Started libpod-conmon-76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f.scope.
Jan 21 09:00:05 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:00:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:00:05 np0005590528 python3.9[210277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:05 np0005590528 python3.9[210360]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:06 np0005590528 python3.9[210512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:07 np0005590528 python3.9[210590]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2m4smt3m recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:07 np0005590528 python3.9[210752]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:08 np0005590528 podman[210259]: 2026-01-21 14:00:08.236014643 +0000 UTC m=+3.195009024 container init 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 09:00:08 np0005590528 podman[210259]: 2026-01-21 14:00:08.249137591 +0000 UTC m=+3.208131872 container start 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:00:08 np0005590528 python3.9[210830]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:08 np0005590528 podman[210259]: 2026-01-21 14:00:08.458008329 +0000 UTC m=+3.417002630 container attach 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:00:08 np0005590528 podman[210591]: 2026-01-21 14:00:08.581466533 +0000 UTC m=+1.386294209 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 21 09:00:08 np0005590528 lvm[211062]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:00:08 np0005590528 lvm[211062]: VG ceph_vg0 finished
Jan 21 09:00:08 np0005590528 lvm[211065]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:00:08 np0005590528 lvm[211065]: VG ceph_vg1 finished
Jan 21 09:00:09 np0005590528 python3.9[211053]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:00:09 np0005590528 lvm[211067]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:00:09 np0005590528 lvm[211067]: VG ceph_vg2 finished
Jan 21 09:00:09 np0005590528 bold_mclaren[210280]: {}
Jan 21 09:00:09 np0005590528 podman[210259]: 2026-01-21 14:00:09.138349881 +0000 UTC m=+4.097344162 container died 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:00:09 np0005590528 systemd[1]: libpod-76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f.scope: Deactivated successfully.
Jan 21 09:00:09 np0005590528 systemd[1]: libpod-76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f.scope: Consumed 1.388s CPU time.
Jan 21 09:00:09 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a-merged.mount: Deactivated successfully.
Jan 21 09:00:09 np0005590528 podman[210259]: 2026-01-21 14:00:09.245761037 +0000 UTC m=+4.204755318 container remove 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:00:09 np0005590528 systemd[1]: libpod-conmon-76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f.scope: Deactivated successfully.
Jan 21 09:00:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:00:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:10 np0005590528 python3[211234]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 09:00:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:00:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:00:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:00:10 np0005590528 python3.9[211411]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:00:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:00:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:00:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:00:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:00:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:00:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:00:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:00:11 np0005590528 python3.9[211489]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:12 np0005590528 python3.9[211641]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:12 np0005590528 python3.9[211766]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769004011.6578555-1280-6280688022404/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:13 np0005590528 python3.9[211918]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:13 np0005590528 python3.9[211996]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:14 np0005590528 python3.9[212148]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:15 np0005590528 python3.9[212226]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:15 np0005590528 python3.9[212378]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:16 np0005590528 python3.9[212503]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769004015.248628-1319-196006270225572/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:17 np0005590528 python3.9[212655]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:17 np0005590528 python3.9[212807]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:00:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:18 np0005590528 python3.9[212962]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:19 np0005590528 python3.9[213114]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:00:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:20 np0005590528 python3.9[213267]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:00:20 np0005590528 python3.9[213421]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:00:21 np0005590528 python3.9[213576]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:22 np0005590528 python3.9[213728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:23 np0005590528 python3.9[213851]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004021.9249754-1391-248770487732909/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:23 np0005590528 python3.9[214003]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:24 np0005590528 python3.9[214126]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004023.2057672-1406-126552002965037/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:25 np0005590528 python3.9[214278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:25 np0005590528 python3.9[214401]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004024.5823274-1421-128822425754455/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:26 np0005590528 python3.9[214553]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:00:26 np0005590528 systemd[1]: Reloading.
Jan 21 09:00:26 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:00:26 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:00:27 np0005590528 systemd[1]: Reached target edpm_libvirt.target.
Jan 21 09:00:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:28 np0005590528 python3.9[214745]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 09:00:28 np0005590528 systemd[1]: Reloading.
Jan 21 09:00:28 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:00:28 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:00:28 np0005590528 systemd[1]: Reloading.
Jan 21 09:00:28 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:00:28 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:00:29 np0005590528 systemd[1]: session-49.scope: Deactivated successfully.
Jan 21 09:00:29 np0005590528 systemd[1]: session-49.scope: Consumed 3min 38.078s CPU time.
Jan 21 09:00:29 np0005590528 systemd-logind[780]: Session 49 logged out. Waiting for processes to exit.
Jan 21 09:00:29 np0005590528 systemd-logind[780]: Removed session 49.
Jan 21 09:00:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:32 np0005590528 podman[214841]: 2026-01-21 14:00:32.39542782 +0000 UTC m=+0.111700411 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 09:00:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:00:33.889 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:00:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:00:33.890 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:00:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:00:33.890 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:00:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:38 np0005590528 systemd-logind[780]: New session 50 of user zuul.
Jan 21 09:00:38 np0005590528 systemd[1]: Started Session 50 of User zuul.
Jan 21 09:00:38 np0005590528 podman[214870]: 2026-01-21 14:00:38.745306314 +0000 UTC m=+0.085710310 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 21 09:00:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:00:39
Jan 21 09:00:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:00:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:00:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 21 09:00:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:00:39 np0005590528 python3.9[215042]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 09:00:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:00:41 np0005590528 python3.9[215196]: ansible-ansible.builtin.service_facts Invoked
Jan 21 09:00:41 np0005590528 network[215213]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 09:00:41 np0005590528 network[215214]: 'network-scripts' will be removed from distribution in near future.
Jan 21 09:00:41 np0005590528 network[215215]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 09:00:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:46 np0005590528 python3.9[215487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 09:00:47 np0005590528 python3.9[215571]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 09:00:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:00:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:00:51 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:54 np0005590528 python3.9[215724]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:00:55 np0005590528 python3.9[215876]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:00:55 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:56 np0005590528 python3.9[216029]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:00:56 np0005590528 python3.9[216181]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:00:57 np0005590528 python3.9[216334]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:00:57 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:00:58 np0005590528 python3.9[216457]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004057.167376-90-82637160701717/.source.iscsi _original_basename=.2na6g9u2 follow=False checksum=2be7a59c6f4b810a0d44607c773617b0c858b872 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:00:59 np0005590528 python3.9[216609]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:00:59 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:00 np0005590528 python3.9[216761]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:01 np0005590528 python3.9[216913]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:01:01 np0005590528 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 21 09:01:01 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:02 np0005590528 python3.9[217084]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:01:02 np0005590528 systemd[1]: Reloading.
Jan 21 09:01:02 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:01:02 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:01:02 np0005590528 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 21 09:01:02 np0005590528 systemd[1]: Starting Open-iSCSI...
Jan 21 09:01:02 np0005590528 kernel: Loading iSCSI transport class v2.0-870.
Jan 21 09:01:02 np0005590528 systemd[1]: Started Open-iSCSI.
Jan 21 09:01:02 np0005590528 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 21 09:01:02 np0005590528 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 21 09:01:02 np0005590528 podman[217124]: 2026-01-21 14:01:02.676448192 +0000 UTC m=+0.130489530 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 09:01:03 np0005590528 python3.9[217311]: ansible-ansible.builtin.service_facts Invoked
Jan 21 09:01:03 np0005590528 network[217328]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 09:01:03 np0005590528 network[217329]: 'network-scripts' will be removed from distribution in near future.
Jan 21 09:01:03 np0005590528 network[217330]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 09:01:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:03 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:05 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:07 np0005590528 python3.9[217602]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 09:01:07 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:09 np0005590528 podman[217606]: 2026-01-21 14:01:09.347396211 +0000 UTC m=+0.061754707 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 09:01:09 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:10 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 09:01:10 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 09:01:10 np0005590528 systemd[1]: Reloading.
Jan 21 09:01:10 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:01:10 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:01:10 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 09:01:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:01:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:01:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:01:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:01:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:01:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:01:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:01:11 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 09:01:11 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 09:01:11 np0005590528 systemd[1]: run-rf844a7a045794c09ba223cd02586527b.service: Deactivated successfully.
Jan 21 09:01:11 np0005590528 podman[217930]: 2026-01-21 14:01:11.928542316 +0000 UTC m=+0.121553875 container create 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 09:01:11 np0005590528 podman[217930]: 2026-01-21 14:01:11.835640532 +0000 UTC m=+0.028652121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:01:11 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:12 np0005590528 python3.9[218095]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 21 09:01:13 np0005590528 systemd[1]: Started libpod-conmon-060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956.scope.
Jan 21 09:01:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:01:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:01:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:01:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:01:13 np0005590528 podman[217930]: 2026-01-21 14:01:13.212754291 +0000 UTC m=+1.405765890 container init 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 09:01:13 np0005590528 podman[217930]: 2026-01-21 14:01:13.227265581 +0000 UTC m=+1.420277170 container start 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:01:13 np0005590528 crazy_noyce[218175]: 167 167
Jan 21 09:01:13 np0005590528 systemd[1]: libpod-060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956.scope: Deactivated successfully.
Jan 21 09:01:13 np0005590528 python3.9[218264]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 21 09:01:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:13 np0005590528 podman[217930]: 2026-01-21 14:01:13.887614567 +0000 UTC m=+2.080626126 container attach 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:01:13 np0005590528 podman[217930]: 2026-01-21 14:01:13.889233275 +0000 UTC m=+2.082244884 container died 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 09:01:13 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:14 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1b503a0e98cca48a5ee24ebf83442940b0a1cab11e3fa9dd71ef41c591e0bd55-merged.mount: Deactivated successfully.
Jan 21 09:01:14 np0005590528 python3.9[218420]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:01:14 np0005590528 podman[217930]: 2026-01-21 14:01:14.652739234 +0000 UTC m=+2.845750783 container remove 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 09:01:14 np0005590528 systemd[1]: libpod-conmon-060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956.scope: Deactivated successfully.
Jan 21 09:01:14 np0005590528 podman[218547]: 2026-01-21 14:01:14.834680492 +0000 UTC m=+0.025141736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:01:15 np0005590528 python3.9[218562]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004073.8738213-178-71466403781782/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:15 np0005590528 podman[218547]: 2026-01-21 14:01:15.080481324 +0000 UTC m=+0.270942558 container create 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 09:01:15 np0005590528 systemd[1]: Started libpod-conmon-3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d.scope.
Jan 21 09:01:15 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:01:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:15 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:15 np0005590528 podman[218547]: 2026-01-21 14:01:15.613274943 +0000 UTC m=+0.803736177 container init 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:01:15 np0005590528 podman[218547]: 2026-01-21 14:01:15.624683887 +0000 UTC m=+0.815145121 container start 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:01:15 np0005590528 python3.9[218722]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:15 np0005590528 podman[218547]: 2026-01-21 14:01:15.805966168 +0000 UTC m=+0.996427402 container attach 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 09:01:15 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:16 np0005590528 competent_pare[218592]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:01:16 np0005590528 competent_pare[218592]: --> All data devices are unavailable
Jan 21 09:01:16 np0005590528 systemd[1]: libpod-3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d.scope: Deactivated successfully.
Jan 21 09:01:16 np0005590528 podman[218547]: 2026-01-21 14:01:16.13315646 +0000 UTC m=+1.323617704 container died 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 09:01:16 np0005590528 systemd[1]: var-lib-containers-storage-overlay-94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3-merged.mount: Deactivated successfully.
Jan 21 09:01:16 np0005590528 podman[218547]: 2026-01-21 14:01:16.735216284 +0000 UTC m=+1.925677528 container remove 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 09:01:16 np0005590528 systemd[1]: libpod-conmon-3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d.scope: Deactivated successfully.
Jan 21 09:01:16 np0005590528 python3.9[218901]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 09:01:17 np0005590528 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 21 09:01:17 np0005590528 systemd[1]: Stopped Load Kernel Modules.
Jan 21 09:01:17 np0005590528 systemd[1]: Stopping Load Kernel Modules...
Jan 21 09:01:17 np0005590528 systemd[1]: Starting Load Kernel Modules...
Jan 21 09:01:17 np0005590528 systemd[1]: Finished Load Kernel Modules.
Jan 21 09:01:17 np0005590528 podman[218969]: 2026-01-21 14:01:17.138076506 +0000 UTC m=+0.038279922 container create fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:01:17 np0005590528 podman[218969]: 2026-01-21 14:01:17.120418831 +0000 UTC m=+0.020622267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:01:17 np0005590528 systemd[1]: Started libpod-conmon-fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15.scope.
Jan 21 09:01:17 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:01:17 np0005590528 podman[218969]: 2026-01-21 14:01:17.353298754 +0000 UTC m=+0.253502200 container init fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:01:17 np0005590528 podman[218969]: 2026-01-21 14:01:17.360341703 +0000 UTC m=+0.260545119 container start fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:01:17 np0005590528 podman[218969]: 2026-01-21 14:01:17.364498114 +0000 UTC m=+0.264701520 container attach fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 09:01:17 np0005590528 nifty_bassi[219060]: 167 167
Jan 21 09:01:17 np0005590528 systemd[1]: libpod-fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15.scope: Deactivated successfully.
Jan 21 09:01:17 np0005590528 podman[218969]: 2026-01-21 14:01:17.366974872 +0000 UTC m=+0.267178288 container died fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:01:17 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b28862fcb92af11911d6672c99077946be1f536e2bfbdf66e0a946fd511586a7-merged.mount: Deactivated successfully.
Jan 21 09:01:17 np0005590528 podman[218969]: 2026-01-21 14:01:17.41675507 +0000 UTC m=+0.316958496 container remove fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:01:17 np0005590528 systemd[1]: libpod-conmon-fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15.scope: Deactivated successfully.
Jan 21 09:01:17 np0005590528 podman[219161]: 2026-01-21 14:01:17.597829946 +0000 UTC m=+0.043333823 container create 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 09:01:17 np0005590528 systemd[1]: Started libpod-conmon-94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0.scope.
Jan 21 09:01:17 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:01:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:17 np0005590528 podman[219161]: 2026-01-21 14:01:17.580375047 +0000 UTC m=+0.025878954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:01:17 np0005590528 podman[219161]: 2026-01-21 14:01:17.677524604 +0000 UTC m=+0.123028521 container init 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 09:01:17 np0005590528 podman[219161]: 2026-01-21 14:01:17.689453351 +0000 UTC m=+0.134957228 container start 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:01:17 np0005590528 podman[219161]: 2026-01-21 14:01:17.703399967 +0000 UTC m=+0.148903884 container attach 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:01:17 np0005590528 python3.9[219156]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:01:17 np0005590528 confident_swartz[219178]: {
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:    "0": [
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:        {
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "devices": [
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "/dev/loop3"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            ],
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_name": "ceph_lv0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_size": "21470642176",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "name": "ceph_lv0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "tags": {
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cluster_name": "ceph",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.crush_device_class": "",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.encrypted": "0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.objectstore": "bluestore",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osd_id": "0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.type": "block",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.vdo": "0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.with_tpm": "0"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            },
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "type": "block",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "vg_name": "ceph_vg0"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:        }
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:    ],
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:    "1": [
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:        {
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "devices": [
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "/dev/loop4"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            ],
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_name": "ceph_lv1",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_size": "21470642176",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "name": "ceph_lv1",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "tags": {
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cluster_name": "ceph",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.crush_device_class": "",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.encrypted": "0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.objectstore": "bluestore",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osd_id": "1",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.type": "block",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.vdo": "0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.with_tpm": "0"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            },
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "type": "block",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "vg_name": "ceph_vg1"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:        }
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:    ],
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:    "2": [
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:        {
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "devices": [
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "/dev/loop5"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            ],
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_name": "ceph_lv2",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_size": "21470642176",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "name": "ceph_lv2",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "tags": {
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.cluster_name": "ceph",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.crush_device_class": "",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.encrypted": "0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.objectstore": "bluestore",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osd_id": "2",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.type": "block",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.vdo": "0",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:                "ceph.with_tpm": "0"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            },
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "type": "block",
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:            "vg_name": "ceph_vg2"
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:        }
Jan 21 09:01:17 np0005590528 confident_swartz[219178]:    ]
Jan 21 09:01:17 np0005590528 confident_swartz[219178]: }
Jan 21 09:01:17 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:17 np0005590528 systemd[1]: libpod-94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0.scope: Deactivated successfully.
Jan 21 09:01:18 np0005590528 podman[219161]: 2026-01-21 14:01:17.999899019 +0000 UTC m=+0.445402896 container died 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:01:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay-743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9-merged.mount: Deactivated successfully.
Jan 21 09:01:18 np0005590528 podman[219161]: 2026-01-21 14:01:18.102022156 +0000 UTC m=+0.547526033 container remove 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 09:01:18 np0005590528 systemd[1]: libpod-conmon-94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0.scope: Deactivated successfully.
Jan 21 09:01:18 np0005590528 python3.9[219401]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:01:18 np0005590528 podman[219413]: 2026-01-21 14:01:18.539389959 +0000 UTC m=+0.046176292 container create 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:01:18 np0005590528 systemd[1]: Started libpod-conmon-99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814.scope.
Jan 21 09:01:18 np0005590528 podman[219413]: 2026-01-21 14:01:18.515086143 +0000 UTC m=+0.021872496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:01:18 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:01:18 np0005590528 podman[219413]: 2026-01-21 14:01:18.64417616 +0000 UTC m=+0.150962523 container init 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 09:01:18 np0005590528 podman[219413]: 2026-01-21 14:01:18.657159132 +0000 UTC m=+0.163945475 container start 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 09:01:18 np0005590528 podman[219413]: 2026-01-21 14:01:18.660918022 +0000 UTC m=+0.167704355 container attach 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 09:01:18 np0005590528 nervous_aryabhata[219453]: 167 167
Jan 21 09:01:18 np0005590528 systemd[1]: libpod-99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814.scope: Deactivated successfully.
Jan 21 09:01:18 np0005590528 podman[219413]: 2026-01-21 14:01:18.663445423 +0000 UTC m=+0.170231756 container died 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 09:01:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay-89de339261dbc5f2534d9e7f8ee84b177c7a9c8b5053fdab32c8b4286cc2c4f9-merged.mount: Deactivated successfully.
Jan 21 09:01:18 np0005590528 podman[219413]: 2026-01-21 14:01:18.715790833 +0000 UTC m=+0.222577206 container remove 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 09:01:18 np0005590528 systemd[1]: libpod-conmon-99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814.scope: Deactivated successfully.
Jan 21 09:01:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:18 np0005590528 podman[219535]: 2026-01-21 14:01:18.891453208 +0000 UTC m=+0.045682689 container create 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:01:18 np0005590528 systemd[1]: Started libpod-conmon-4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42.scope.
Jan 21 09:01:18 np0005590528 podman[219535]: 2026-01-21 14:01:18.869804738 +0000 UTC m=+0.024034229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:01:18 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:01:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:18 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:01:18 np0005590528 podman[219535]: 2026-01-21 14:01:18.983722238 +0000 UTC m=+0.137951719 container init 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 09:01:18 np0005590528 podman[219535]: 2026-01-21 14:01:18.991360971 +0000 UTC m=+0.145590422 container start 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 09:01:18 np0005590528 podman[219535]: 2026-01-21 14:01:18.995499401 +0000 UTC m=+0.149728862 container attach 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 09:01:19 np0005590528 python3.9[219625]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:01:19 np0005590528 lvm[219822]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:01:19 np0005590528 lvm[219822]: VG ceph_vg0 finished
Jan 21 09:01:19 np0005590528 lvm[219823]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:01:19 np0005590528 lvm[219823]: VG ceph_vg1 finished
Jan 21 09:01:19 np0005590528 lvm[219825]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:01:19 np0005590528 lvm[219825]: VG ceph_vg2 finished
Jan 21 09:01:19 np0005590528 python3.9[219807]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004078.7337143-229-55553962586106/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:19 np0005590528 cool_tesla[219592]: {}
Jan 21 09:01:19 np0005590528 systemd[1]: libpod-4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42.scope: Deactivated successfully.
Jan 21 09:01:19 np0005590528 podman[219535]: 2026-01-21 14:01:19.782570017 +0000 UTC m=+0.936799488 container died 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 09:01:19 np0005590528 systemd[1]: libpod-4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42.scope: Consumed 1.308s CPU time.
Jan 21 09:01:19 np0005590528 systemd[1]: var-lib-containers-storage-overlay-f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7-merged.mount: Deactivated successfully.
Jan 21 09:01:19 np0005590528 podman[219535]: 2026-01-21 14:01:19.82553517 +0000 UTC m=+0.979764631 container remove 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:01:19 np0005590528 systemd[1]: libpod-conmon-4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42.scope: Deactivated successfully.
Jan 21 09:01:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:01:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:01:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:01:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:01:19 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:20 np0005590528 python3.9[220014]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:01:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:01:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:01:20 np0005590528 python3.9[220167]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:21 np0005590528 python3.9[220319]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:21 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:22 np0005590528 python3.9[220471]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:23 np0005590528 python3.9[220623]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:23 np0005590528 python3.9[220775]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:23 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:24 np0005590528 python3.9[220927]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:24 np0005590528 python3.9[221079]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:25 np0005590528 python3.9[221231]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:01:25 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:26 np0005590528 python3.9[221385]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:01:26 np0005590528 python3.9[221538]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:01:26 np0005590528 systemd[1]: Listening on multipathd control socket.
Jan 21 09:01:27 np0005590528 python3.9[221694]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:01:27 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:28 np0005590528 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 21 09:01:28 np0005590528 udevadm[221699]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 21 09:01:28 np0005590528 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 21 09:01:28 np0005590528 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 21 09:01:28 np0005590528 multipathd[221703]: --------start up--------
Jan 21 09:01:28 np0005590528 multipathd[221703]: read /etc/multipath.conf
Jan 21 09:01:28 np0005590528 multipathd[221703]: path checkers start up
Jan 21 09:01:29 np0005590528 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 21 09:01:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:29 np0005590528 python3.9[221862]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 21 09:01:29 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:30 np0005590528 python3.9[222014]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 21 09:01:30 np0005590528 kernel: Key type psk registered
Jan 21 09:01:31 np0005590528 python3.9[222177]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:01:31 np0005590528 python3.9[222300]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004090.7469246-359-189038850835010/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:31 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:32 np0005590528 python3.9[222452]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:33 np0005590528 podman[222576]: 2026-01-21 14:01:33.205092092 +0000 UTC m=+0.137619022 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 21 09:01:33 np0005590528 python3.9[222618]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 09:01:33 np0005590528 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 21 09:01:33 np0005590528 systemd[1]: Stopped Load Kernel Modules.
Jan 21 09:01:33 np0005590528 systemd[1]: Stopping Load Kernel Modules...
Jan 21 09:01:33 np0005590528 systemd[1]: Starting Load Kernel Modules...
Jan 21 09:01:33 np0005590528 systemd[1]: Finished Load Kernel Modules.
Jan 21 09:01:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:01:33.891 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:01:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:01:33.891 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:01:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:01:33.892 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:01:33 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 21 09:01:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:34 np0005590528 python3.9[222783]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 09:01:35 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 09:01:37 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 09:01:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:01:39
Jan 21 09:01:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:01:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:01:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'default.rgw.control']
Jan 21 09:01:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:01:39 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 09:01:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:40 np0005590528 podman[222788]: 2026-01-21 14:01:40.396194026 +0000 UTC m=+0.112394924 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:01:41 np0005590528 systemd[1]: Reloading.
Jan 21 09:01:41 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:01:41 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:01:41 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 09:01:42 np0005590528 systemd[1]: Reloading.
Jan 21 09:01:42 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:01:42 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:01:42 np0005590528 systemd-logind[780]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 21 09:01:42 np0005590528 lvm[222917]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:01:42 np0005590528 lvm[222917]: VG ceph_vg2 finished
Jan 21 09:01:42 np0005590528 lvm[222915]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:01:42 np0005590528 lvm[222915]: VG ceph_vg0 finished
Jan 21 09:01:42 np0005590528 lvm[222918]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:01:42 np0005590528 lvm[222918]: VG ceph_vg1 finished
Jan 21 09:01:42 np0005590528 systemd-logind[780]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 21 09:01:43 np0005590528 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 09:01:43 np0005590528 systemd[1]: Starting man-db-cache-update.service...
Jan 21 09:01:43 np0005590528 systemd[1]: Reloading.
Jan 21 09:01:43 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:01:43 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:01:43 np0005590528 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 09:01:43 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 09:01:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:45 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 21 09:01:47 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:48 np0005590528 python3.9[224274]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 09:01:48 np0005590528 iscsid[217126]: iscsid shutting down.
Jan 21 09:01:48 np0005590528 systemd[1]: Stopping Open-iSCSI...
Jan 21 09:01:48 np0005590528 systemd[1]: iscsid.service: Deactivated successfully.
Jan 21 09:01:48 np0005590528 systemd[1]: Stopped Open-iSCSI.
Jan 21 09:01:48 np0005590528 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 21 09:01:48 np0005590528 systemd[1]: Starting Open-iSCSI...
Jan 21 09:01:48 np0005590528 systemd[1]: Started Open-iSCSI.
Jan 21 09:01:48 np0005590528 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 09:01:48 np0005590528 systemd[1]: Finished man-db-cache-update.service.
Jan 21 09:01:48 np0005590528 systemd[1]: man-db-cache-update.service: Consumed 1.688s CPU time.
Jan 21 09:01:48 np0005590528 systemd[1]: run-r7ae700fde18a45f08d176fc9364d6fc6.service: Deactivated successfully.
Jan 21 09:01:49 np0005590528 python3.9[224431]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 09:01:49 np0005590528 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 21 09:01:49 np0005590528 multipathd[221703]: exit (signal)
Jan 21 09:01:49 np0005590528 multipathd[221703]: --------shut down-------
Jan 21 09:01:49 np0005590528 systemd[1]: multipathd.service: Deactivated successfully.
Jan 21 09:01:49 np0005590528 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 21 09:01:49 np0005590528 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 21 09:01:49 np0005590528 multipathd[224438]: --------start up--------
Jan 21 09:01:49 np0005590528 multipathd[224438]: read /etc/multipath.conf
Jan 21 09:01:49 np0005590528 multipathd[224438]: path checkers start up
Jan 21 09:01:49 np0005590528 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 21 09:01:49 np0005590528 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 21 09:01:49 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:50 np0005590528 python3.9[224596]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 09:01:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:01:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:01:51 np0005590528 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 21 09:01:51 np0005590528 python3.9[224753]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:01:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:52 np0005590528 python3.9[224905]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 09:01:52 np0005590528 systemd[1]: Reloading.
Jan 21 09:01:52 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:01:52 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:01:53 np0005590528 python3.9[225090]: ansible-ansible.builtin.service_facts Invoked
Jan 21 09:01:53 np0005590528 network[225107]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 09:01:53 np0005590528 network[225108]: 'network-scripts' will be removed from distribution in near future.
Jan 21 09:01:53 np0005590528 network[225109]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 09:01:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:01:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:01:58 np0005590528 python3.9[225382]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:01:58 np0005590528 python3.9[225535]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:01:59 np0005590528 python3.9[225688]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:02:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:00 np0005590528 python3.9[225841]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:02:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:01 np0005590528 python3.9[225994]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:02:01 np0005590528 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 21 09:02:01 np0005590528 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 21 09:02:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:02 np0005590528 python3.9[226149]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:02:02 np0005590528 python3.9[226302]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:02:03 np0005590528 podman[226427]: 2026-01-21 14:02:03.369871265 +0000 UTC m=+0.116787350 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 09:02:03 np0005590528 python3.9[226475]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:02:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:04 np0005590528 python3.9[226634]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:05 np0005590528 python3.9[226786]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:05 np0005590528 python3.9[226938]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.070486) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126070519, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1927, "num_deletes": 253, "total_data_size": 3337846, "memory_usage": 3380184, "flush_reason": "Manual Compaction"}
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126160936, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1869755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11749, "largest_seqno": 13675, "table_properties": {"data_size": 1863526, "index_size": 3176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15532, "raw_average_key_size": 20, "raw_value_size": 1849779, "raw_average_value_size": 2402, "num_data_blocks": 147, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003895, "oldest_key_time": 1769003895, "file_creation_time": 1769004126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 90531 microseconds, and 5305 cpu microseconds.
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.161014) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1869755 bytes OK
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.161034) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.168642) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.168881) EVENT_LOG_v1 {"time_micros": 1769004126168871, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.168907) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3329744, prev total WAL file size 3329744, number of live WAL files 2.
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.169857) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1825KB)], [29(7893KB)]
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126169926, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9952989, "oldest_snapshot_seqno": -1}
Jan 21 09:02:06 np0005590528 python3.9[227090]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4063 keys, 7982506 bytes, temperature: kUnknown
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126241971, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7982506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7953421, "index_size": 17839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96446, "raw_average_key_size": 23, "raw_value_size": 7878289, "raw_average_value_size": 1939, "num_data_blocks": 776, "num_entries": 4063, "num_filter_entries": 4063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.242197) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7982506 bytes
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.363783) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.0 rd, 110.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.7 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(9.6) write-amplify(4.3) OK, records in: 4475, records dropped: 412 output_compression: NoCompression
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.363823) EVENT_LOG_v1 {"time_micros": 1769004126363806, "job": 12, "event": "compaction_finished", "compaction_time_micros": 72135, "compaction_time_cpu_micros": 17922, "output_level": 6, "num_output_files": 1, "total_output_size": 7982506, "num_input_records": 4475, "num_output_records": 4063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126364860, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126366439, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.169703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:02:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:02:06 np0005590528 python3.9[227242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:07 np0005590528 python3.9[227394]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:08 np0005590528 python3.9[227546]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:08 np0005590528 python3.9[227698]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:09 np0005590528 python3.9[227850]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:09 np0005590528 python3.9[228002]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:10 np0005590528 python3.9[228154]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:11 np0005590528 podman[228278]: 2026-01-21 14:02:11.003372203 +0000 UTC m=+0.052394365 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Jan 21 09:02:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:02:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:02:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:02:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:02:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:02:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:02:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:11 np0005590528 python3.9[228323]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:11 np0005590528 python3.9[228476]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:12 np0005590528 python3.9[228628]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:13 np0005590528 python3.9[228780]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:13 np0005590528 python3.9[228932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:14 np0005590528 python3.9[229084]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:15 np0005590528 python3.9[229236]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 09:02:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:16 np0005590528 python3.9[229388]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 09:02:16 np0005590528 systemd[1]: Reloading.
Jan 21 09:02:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:16 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:02:16 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:02:17 np0005590528 python3.9[229575]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:18 np0005590528 python3.9[229728]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:19 np0005590528 python3.9[229881]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:19 np0005590528 python3.9[230034]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:20 np0005590528 python3.9[230250]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:02:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:02:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:21 np0005590528 python3.9[230445]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:21 np0005590528 podman[230507]: 2026-01-21 14:02:21.336644316 +0000 UTC m=+0.026588943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:02:21 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:02:21 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:02:21 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:02:21 np0005590528 podman[230507]: 2026-01-21 14:02:21.634994237 +0000 UTC m=+0.324938834 container create c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 09:02:21 np0005590528 systemd[1]: Started libpod-conmon-c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07.scope.
Jan 21 09:02:21 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:02:21 np0005590528 python3.9[230649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:22 np0005590528 podman[230507]: 2026-01-21 14:02:22.074157998 +0000 UTC m=+0.764102635 container init c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:02:22 np0005590528 podman[230507]: 2026-01-21 14:02:22.083518494 +0000 UTC m=+0.773463111 container start c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 09:02:22 np0005590528 awesome_meninsky[230652]: 167 167
Jan 21 09:02:22 np0005590528 systemd[1]: libpod-c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07.scope: Deactivated successfully.
Jan 21 09:02:22 np0005590528 podman[230507]: 2026-01-21 14:02:22.103792413 +0000 UTC m=+0.793737020 container attach c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 09:02:22 np0005590528 podman[230507]: 2026-01-21 14:02:22.104692255 +0000 UTC m=+0.794636862 container died c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:02:22 np0005590528 systemd[1]: var-lib-containers-storage-overlay-41b2384991e6da3acd3396bf2857b8bbaeb7d1290cbc4d22158693246c746866-merged.mount: Deactivated successfully.
Jan 21 09:02:22 np0005590528 podman[230507]: 2026-01-21 14:02:22.432770904 +0000 UTC m=+1.122715501 container remove c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:02:22 np0005590528 systemd[1]: libpod-conmon-c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07.scope: Deactivated successfully.
Jan 21 09:02:22 np0005590528 python3.9[230822]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 09:02:22 np0005590528 podman[230829]: 2026-01-21 14:02:22.598884853 +0000 UTC m=+0.043001069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:02:22 np0005590528 podman[230829]: 2026-01-21 14:02:22.836210561 +0000 UTC m=+0.280326747 container create 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 09:02:22 np0005590528 systemd[1]: Started libpod-conmon-485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15.scope.
Jan 21 09:02:22 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:02:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:23 np0005590528 podman[230829]: 2026-01-21 14:02:23.11330345 +0000 UTC m=+0.557419656 container init 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 21 09:02:23 np0005590528 podman[230829]: 2026-01-21 14:02:23.120496763 +0000 UTC m=+0.564612949 container start 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:02:23 np0005590528 podman[230829]: 2026-01-21 14:02:23.17296647 +0000 UTC m=+0.617082666 container attach 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 09:02:23 np0005590528 inspiring_wright[230870]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:02:23 np0005590528 inspiring_wright[230870]: --> All data devices are unavailable
Jan 21 09:02:23 np0005590528 systemd[1]: libpod-485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15.scope: Deactivated successfully.
Jan 21 09:02:23 np0005590528 podman[230829]: 2026-01-21 14:02:23.614823184 +0000 UTC m=+1.058939380 container died 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 09:02:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:24 np0005590528 python3.9[231029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:24 np0005590528 python3.9[231181]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:25 np0005590528 python3.9[231334]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:25 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0-merged.mount: Deactivated successfully.
Jan 21 09:02:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:26 np0005590528 python3.9[231486]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:26 np0005590528 python3.9[231638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:27 np0005590528 podman[230829]: 2026-01-21 14:02:27.358140116 +0000 UTC m=+4.802256342 container remove 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 09:02:27 np0005590528 systemd[1]: libpod-conmon-485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15.scope: Deactivated successfully.
Jan 21 09:02:27 np0005590528 python3.9[231790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:27 np0005590528 podman[231929]: 2026-01-21 14:02:27.811682313 +0000 UTC m=+0.024730528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:02:27 np0005590528 podman[231929]: 2026-01-21 14:02:27.94699743 +0000 UTC m=+0.160045625 container create 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:02:28 np0005590528 systemd[1]: Started libpod-conmon-371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947.scope.
Jan 21 09:02:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:28 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:02:28 np0005590528 podman[231929]: 2026-01-21 14:02:28.064818034 +0000 UTC m=+0.277866249 container init 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 09:02:28 np0005590528 podman[231929]: 2026-01-21 14:02:28.072063528 +0000 UTC m=+0.285111723 container start 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 09:02:28 np0005590528 stoic_moser[232021]: 167 167
Jan 21 09:02:28 np0005590528 systemd[1]: libpod-371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947.scope: Deactivated successfully.
Jan 21 09:02:28 np0005590528 podman[231929]: 2026-01-21 14:02:28.129872553 +0000 UTC m=+0.342920748 container attach 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 09:02:28 np0005590528 podman[231929]: 2026-01-21 14:02:28.130442508 +0000 UTC m=+0.343490703 container died 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 09:02:28 np0005590528 python3.9[232023]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:28 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d0c1168f56506f6d806d758d6a4f76803f993f05c98565b04bccba38fe487d72-merged.mount: Deactivated successfully.
Jan 21 09:02:28 np0005590528 podman[231929]: 2026-01-21 14:02:28.402602736 +0000 UTC m=+0.615650931 container remove 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 09:02:28 np0005590528 systemd[1]: libpod-conmon-371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947.scope: Deactivated successfully.
Jan 21 09:02:28 np0005590528 podman[232169]: 2026-01-21 14:02:28.554095323 +0000 UTC m=+0.024751158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:02:28 np0005590528 python3.9[232210]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:29 np0005590528 podman[232169]: 2026-01-21 14:02:29.052197755 +0000 UTC m=+0.522853610 container create 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:02:29 np0005590528 systemd[1]: Started libpod-conmon-9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510.scope.
Jan 21 09:02:29 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:02:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:29 np0005590528 podman[232169]: 2026-01-21 14:02:29.442626799 +0000 UTC m=+0.913282634 container init 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 09:02:29 np0005590528 podman[232169]: 2026-01-21 14:02:29.452085758 +0000 UTC m=+0.922741573 container start 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 09:02:29 np0005590528 podman[232169]: 2026-01-21 14:02:29.48659528 +0000 UTC m=+0.957251085 container attach 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 09:02:29 np0005590528 python3.9[232370]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]: {
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:    "0": [
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:        {
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "devices": [
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "/dev/loop3"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            ],
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_name": "ceph_lv0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_size": "21470642176",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "name": "ceph_lv0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "tags": {
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cluster_name": "ceph",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.crush_device_class": "",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.encrypted": "0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.objectstore": "bluestore",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osd_id": "0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.type": "block",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.vdo": "0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.with_tpm": "0"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            },
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "type": "block",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "vg_name": "ceph_vg0"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:        }
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:    ],
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:    "1": [
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:        {
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "devices": [
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "/dev/loop4"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            ],
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_name": "ceph_lv1",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_size": "21470642176",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "name": "ceph_lv1",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "tags": {
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cluster_name": "ceph",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.crush_device_class": "",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.encrypted": "0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.objectstore": "bluestore",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osd_id": "1",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.type": "block",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.vdo": "0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.with_tpm": "0"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            },
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "type": "block",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "vg_name": "ceph_vg1"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:        }
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:    ],
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:    "2": [
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:        {
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "devices": [
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "/dev/loop5"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            ],
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_name": "ceph_lv2",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_size": "21470642176",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "name": "ceph_lv2",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "tags": {
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.cluster_name": "ceph",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.crush_device_class": "",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.encrypted": "0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.objectstore": "bluestore",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osd_id": "2",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.type": "block",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.vdo": "0",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:                "ceph.with_tpm": "0"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            },
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "type": "block",
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:            "vg_name": "ceph_vg2"
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:        }
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]:    ]
Jan 21 09:02:29 np0005590528 friendly_cartwright[232246]: }
Jan 21 09:02:29 np0005590528 systemd[1]: libpod-9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510.scope: Deactivated successfully.
Jan 21 09:02:29 np0005590528 podman[232399]: 2026-01-21 14:02:29.80023319 +0000 UTC m=+0.026688245 container died 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:02:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:30 np0005590528 systemd[1]: var-lib-containers-storage-overlay-429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2-merged.mount: Deactivated successfully.
Jan 21 09:02:30 np0005590528 podman[232399]: 2026-01-21 14:02:30.121410453 +0000 UTC m=+0.347865468 container remove 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:02:30 np0005590528 systemd[1]: libpod-conmon-9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510.scope: Deactivated successfully.
Jan 21 09:02:30 np0005590528 python3.9[232538]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:30 np0005590528 podman[232624]: 2026-01-21 14:02:30.624491806 +0000 UTC m=+0.061947286 container create f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:02:30 np0005590528 systemd[1]: Started libpod-conmon-f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa.scope.
Jan 21 09:02:30 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:02:30 np0005590528 podman[232624]: 2026-01-21 14:02:30.599225816 +0000 UTC m=+0.036681266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:02:30 np0005590528 podman[232624]: 2026-01-21 14:02:30.742405312 +0000 UTC m=+0.179860782 container init f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 09:02:30 np0005590528 podman[232624]: 2026-01-21 14:02:30.748909749 +0000 UTC m=+0.186365189 container start f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:02:30 np0005590528 optimistic_turing[232640]: 167 167
Jan 21 09:02:30 np0005590528 systemd[1]: libpod-f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa.scope: Deactivated successfully.
Jan 21 09:02:30 np0005590528 podman[232624]: 2026-01-21 14:02:30.755938549 +0000 UTC m=+0.193393999 container attach f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:02:30 np0005590528 podman[232624]: 2026-01-21 14:02:30.757728012 +0000 UTC m=+0.195183482 container died f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:02:31 np0005590528 systemd[1]: var-lib-containers-storage-overlay-11ac6d5e081745b881d9092df7983a3ee78b6af1d8394af26ed238ef4c7b175a-merged.mount: Deactivated successfully.
Jan 21 09:02:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:31 np0005590528 podman[232624]: 2026-01-21 14:02:31.483862189 +0000 UTC m=+0.921317689 container remove f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:02:31 np0005590528 systemd[1]: libpod-conmon-f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa.scope: Deactivated successfully.
Jan 21 09:02:31 np0005590528 podman[232665]: 2026-01-21 14:02:31.655950622 +0000 UTC m=+0.025752433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:02:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:32 np0005590528 podman[232665]: 2026-01-21 14:02:32.104794675 +0000 UTC m=+0.474596476 container create 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:02:32 np0005590528 systemd[1]: Started libpod-conmon-30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693.scope.
Jan 21 09:02:33 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:02:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:02:33 np0005590528 podman[232665]: 2026-01-21 14:02:33.046875514 +0000 UTC m=+1.416677325 container init 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:02:33 np0005590528 podman[232665]: 2026-01-21 14:02:33.0525006 +0000 UTC m=+1.422302391 container start 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:02:33 np0005590528 podman[232665]: 2026-01-21 14:02:33.273778271 +0000 UTC m=+1.643580072 container attach 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:02:33 np0005590528 lvm[232773]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:02:33 np0005590528 lvm[232769]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:02:33 np0005590528 lvm[232769]: VG ceph_vg0 finished
Jan 21 09:02:33 np0005590528 lvm[232773]: VG ceph_vg2 finished
Jan 21 09:02:33 np0005590528 lvm[232772]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:02:33 np0005590528 lvm[232772]: VG ceph_vg1 finished
Jan 21 09:02:33 np0005590528 vibrant_pascal[232681]: {}
Jan 21 09:02:33 np0005590528 podman[232756]: 2026-01-21 14:02:33.858863713 +0000 UTC m=+0.121817601 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 21 09:02:33 np0005590528 systemd[1]: libpod-30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693.scope: Deactivated successfully.
Jan 21 09:02:33 np0005590528 podman[232665]: 2026-01-21 14:02:33.881008278 +0000 UTC m=+2.250810089 container died 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:02:33 np0005590528 systemd[1]: libpod-30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693.scope: Consumed 1.279s CPU time.
Jan 21 09:02:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:02:33.892 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:02:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:02:33.892 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:02:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:02:33.893 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:02:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:34 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4-merged.mount: Deactivated successfully.
Jan 21 09:02:34 np0005590528 podman[232665]: 2026-01-21 14:02:34.58629012 +0000 UTC m=+2.956091951 container remove 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 09:02:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:02:34 np0005590528 systemd[1]: libpod-conmon-30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693.scope: Deactivated successfully.
Jan 21 09:02:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:02:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:02:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:02:35 np0005590528 python3.9[232953]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 21 09:02:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:02:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:02:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:36 np0005590528 python3.9[233106]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 09:02:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:38 np0005590528 python3.9[233264]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 09:02:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:02:39
Jan 21 09:02:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:02:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:02:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'vms', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Jan 21 09:02:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:02:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:40 np0005590528 systemd-logind[780]: New session 51 of user zuul.
Jan 21 09:02:40 np0005590528 systemd[1]: Started Session 51 of User zuul.
Jan 21 09:02:40 np0005590528 systemd[1]: session-51.scope: Deactivated successfully.
Jan 21 09:02:40 np0005590528 systemd-logind[780]: Session 51 logged out. Waiting for processes to exit.
Jan 21 09:02:40 np0005590528 systemd-logind[780]: Removed session 51.
Jan 21 09:02:40 np0005590528 python3.9[233450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:02:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:02:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:41 np0005590528 podman[233545]: 2026-01-21 14:02:41.364998089 +0000 UTC m=+0.073577138 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:02:41 np0005590528 python3.9[233582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004160.4831302-986-271394484781077/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:42 np0005590528 python3.9[233740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:42 np0005590528 python3.9[233816]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:43 np0005590528 python3.9[233966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:43 np0005590528 python3.9[234087]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004162.7616181-986-200570131928423/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:44 np0005590528 python3.9[234237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:44 np0005590528 python3.9[234358]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004163.9033308-986-123740259837830/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:45 np0005590528 python3.9[234508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:45 np0005590528 python3.9[234629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004165.0469036-986-253796967057318/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:46 np0005590528 python3.9[234779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:47 np0005590528 python3.9[234900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004166.1347642-986-139852597775498/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:47 np0005590528 python3.9[235052]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:48 np0005590528 python3.9[235204]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:02:49 np0005590528 python3.9[235356]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:02:49 np0005590528 python3.9[235508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:50 np0005590528 python3.9[235631]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769004169.3950312-1093-89801238780914/.source _original_basename=.075ictz7 follow=False checksum=11fa0e566769e53db26861537ed860b1d9835ba8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:02:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:02:51 np0005590528 python3.9[235783]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:02:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:51 np0005590528 python3.9[235935]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:52 np0005590528 python3.9[236056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004171.4475796-1119-35897265561990/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:52 np0005590528 python3.9[236206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 09:02:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:54 np0005590528 python3.9[236327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004172.5697136-1134-269264532895467/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 09:02:55 np0005590528 python3.9[236479]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 21 09:02:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:02:56 np0005590528 python3.9[236631]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 09:02:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:02:57 np0005590528 python3[236783]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 09:02:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:03:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:06 np0005590528 podman[236841]: 2026-01-21 14:03:06.102747021 +0000 UTC m=+1.819229722 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 09:03:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:03:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:03:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:03:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:03:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:03:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:03:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:03:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:03:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:03:17 np0005590528 podman[236882]: 2026-01-21 14:03:17.993452991 +0000 UTC m=+5.723851246 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 21 09:03:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:18 np0005590528 podman[236797]: 2026-01-21 14:03:18.126334387 +0000 UTC m=+20.599214217 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 09:03:18 np0005590528 podman[236924]: 2026-01-21 14:03:18.298345036 +0000 UTC m=+0.090613489 container create be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init)
Jan 21 09:03:18 np0005590528 podman[236924]: 2026-01-21 14:03:18.243653121 +0000 UTC m=+0.035921654 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 09:03:18 np0005590528 python3[236783]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 21 09:03:19 np0005590528 python3.9[237114]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:03:19 np0005590528 python3.9[237268]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 21 09:03:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:20 np0005590528 python3.9[237420]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 09:03:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:03:21 np0005590528 python3[237572]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 09:03:21 np0005590528 podman[237608]: 2026-01-21 14:03:21.748572689 +0000 UTC m=+0.055091064 container create 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 21 09:03:21 np0005590528 podman[237608]: 2026-01-21 14:03:21.71724826 +0000 UTC m=+0.023766665 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 09:03:21 np0005590528 python3[237572]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 21 09:03:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:22 np0005590528 python3.9[237798]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:03:23 np0005590528 python3.9[237952]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:03:23 np0005590528 python3.9[238103]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769004203.3996093-1230-145735232277953/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 09:03:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:24 np0005590528 python3.9[238179]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 09:03:24 np0005590528 systemd[1]: Reloading.
Jan 21 09:03:24 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:03:24 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:03:25 np0005590528 python3.9[238289]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 09:03:25 np0005590528 systemd[1]: Reloading.
Jan 21 09:03:25 np0005590528 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 09:03:25 np0005590528 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 09:03:25 np0005590528 systemd[1]: Starting nova_compute container...
Jan 21 09:03:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:26 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:03:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:26 np0005590528 podman[238328]: 2026-01-21 14:03:26.320168341 +0000 UTC m=+0.431534080 container init 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:03:26 np0005590528 podman[238328]: 2026-01-21 14:03:26.326307321 +0000 UTC m=+0.437673020 container start 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + sudo -E kolla_set_configs
Jan 21 09:03:26 np0005590528 podman[238328]: nova_compute
Jan 21 09:03:26 np0005590528 systemd[1]: Started nova_compute container.
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Validating config file
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying service configuration files
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Deleting /etc/ceph
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Creating directory /etc/ceph
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Writing out command to execute
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 09:03:26 np0005590528 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 09:03:26 np0005590528 nova_compute[238343]: ++ cat /run_command
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + CMD=nova-compute
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + ARGS=
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + sudo kolla_copy_cacerts
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + [[ ! -n '' ]]
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + . kolla_extend_start
Jan 21 09:03:26 np0005590528 nova_compute[238343]: Running command: 'nova-compute'
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + echo 'Running command: '\''nova-compute'\'''
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + umask 0022
Jan 21 09:03:26 np0005590528 nova_compute[238343]: + exec nova-compute
Jan 21 09:03:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:03:27 np0005590528 python3.9[238504]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:03:27 np0005590528 python3.9[238655]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:03:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:28 np0005590528 nova_compute[238343]: 2026-01-21 14:03:28.702 238347 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 09:03:28 np0005590528 nova_compute[238343]: 2026-01-21 14:03:28.702 238347 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 09:03:28 np0005590528 nova_compute[238343]: 2026-01-21 14:03:28.703 238347 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 09:03:28 np0005590528 nova_compute[238343]: 2026-01-21 14:03:28.703 238347 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 21 09:03:28 np0005590528 python3.9[238805]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 09:03:28 np0005590528 nova_compute[238343]: 2026-01-21 14:03:28.842 238347 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:03:28 np0005590528 nova_compute[238343]: 2026-01-21 14:03:28.868 238347 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:03:28 np0005590528 nova_compute[238343]: 2026-01-21 14:03:28.868 238347 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.450 238347 INFO nova.virt.driver [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.581 238347 INFO nova.compute.provider_config [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.594 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.595 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.595 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.595 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.595 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.596 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.596 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.596 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.596 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.637 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.637 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.637 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.637 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.639 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.639 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.639 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.639 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.640 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.640 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.640 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.640 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.672 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.672 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.672 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.672 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.676 238347 WARNING oslo_config.cfg [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 21 09:03:29 np0005590528 nova_compute[238343]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 21 09:03:29 np0005590528 nova_compute[238343]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 21 09:03:29 np0005590528 nova_compute[238343]: and ``live_migration_inbound_addr`` respectively.
Jan 21 09:03:29 np0005590528 nova_compute[238343]: ).  Its value may be silently ignored in the future.#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.676 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.676 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.676 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.677 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.677 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.677 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.677 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.678 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.678 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.678 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.678 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_secret_uuid        = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.709 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.709 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.709 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.709 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.716 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.716 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.716 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.716 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.745 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.745 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.746 238347 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.760 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.761 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.761 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.762 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 21 09:03:29 np0005590528 systemd[1]: Starting libvirt QEMU daemon...
Jan 21 09:03:29 np0005590528 python3.9[238961]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 21 09:03:29 np0005590528 systemd[1]: Started libvirt QEMU daemon.
Jan 21 09:03:29 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.850 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f04c87b7280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.853 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f04c87b7280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.854 238347 INFO nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.869 238347 WARNING nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 21 09:03:29 np0005590528 nova_compute[238343]: 2026-01-21 14:03:29.869 238347 DEBUG nova.virt.libvirt.volume.mount [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 21 09:03:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:30 np0005590528 nova_compute[238343]: 2026-01-21 14:03:30.782 238347 INFO nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host capabilities <capabilities>
Jan 21 09:03:30 np0005590528 nova_compute[238343]: 
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <host>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <uuid>7823760d-0166-4122-8fb2-3165351e57e7</uuid>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <cpu>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <arch>x86_64</arch>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model>EPYC-Rome-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <vendor>AMD</vendor>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <microcode version='16777317'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <signature family='23' model='49' stepping='0'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='x2apic'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='tsc-deadline'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='osxsave'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='hypervisor'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='tsc_adjust'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='spec-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='stibp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='arch-capabilities'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='ssbd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='cmp_legacy'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='topoext'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='virt-ssbd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='lbrv'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='tsc-scale'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='vmcb-clean'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='pause-filter'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='pfthreshold'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='svme-addr-chk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='rdctl-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='skip-l1dfl-vmentry'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='mds-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature name='pschange-mc-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <pages unit='KiB' size='4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <pages unit='KiB' size='2048'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <pages unit='KiB' size='1048576'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </cpu>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <power_management>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <suspend_mem/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </power_management>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <iommu support='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <migration_features>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <live/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <uri_transports>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <uri_transport>tcp</uri_transport>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <uri_transport>rdma</uri_transport>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </uri_transports>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </migration_features>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <topology>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <cells num='1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <cell id='0'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:          <memory unit='KiB'>7864316</memory>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:          <pages unit='KiB' size='2048'>0</pages>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:          <distances>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <sibling id='0' value='10'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:          </distances>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:          <cpus num='8'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:          </cpus>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        </cell>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </cells>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </topology>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <cache>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </cache>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <secmodel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model>selinux</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <doi>0</doi>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </secmodel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <secmodel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model>dac</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <doi>0</doi>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </secmodel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </host>
Jan 21 09:03:30 np0005590528 nova_compute[238343]: 
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <guest>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <os_type>hvm</os_type>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <arch name='i686'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <wordsize>32</wordsize>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <domain type='qemu'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <domain type='kvm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </arch>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <features>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <pae/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <nonpae/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <acpi default='on' toggle='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <apic default='on' toggle='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <cpuselection/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <deviceboot/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <disksnapshot default='on' toggle='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <externalSnapshot/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </features>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </guest>
Jan 21 09:03:30 np0005590528 nova_compute[238343]: 
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <guest>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <os_type>hvm</os_type>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <arch name='x86_64'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <wordsize>64</wordsize>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <domain type='qemu'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <domain type='kvm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </arch>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <features>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <acpi default='on' toggle='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <apic default='on' toggle='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <cpuselection/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <deviceboot/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <disksnapshot default='on' toggle='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <externalSnapshot/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </features>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </guest>
Jan 21 09:03:30 np0005590528 nova_compute[238343]: 
Jan 21 09:03:30 np0005590528 nova_compute[238343]: </capabilities>
Jan 21 09:03:30 np0005590528 nova_compute[238343]: #033[00m
Jan 21 09:03:30 np0005590528 nova_compute[238343]: 2026-01-21 14:03:30.802 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 21 09:03:30 np0005590528 nova_compute[238343]: 2026-01-21 14:03:30.831 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 21 09:03:30 np0005590528 nova_compute[238343]: <domainCapabilities>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <domain>kvm</domain>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <arch>i686</arch>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <vcpu max='4096'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <iothreads supported='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <os supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <enum name='firmware'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <loader supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>rom</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>pflash</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='readonly'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>yes</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>no</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='secure'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>no</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </loader>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </os>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <cpu>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <mode name='host-passthrough' supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='hostPassthroughMigratable'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>on</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>off</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <mode name='maximum' supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='maximumMigratable'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>on</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>off</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <mode name='host-model' supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <vendor>AMD</vendor>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='x2apic'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='hypervisor'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='stibp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='ssbd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='overflow-recov'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='succor'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='lbrv'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc-scale'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='flushbyasid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='pause-filter'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='pfthreshold'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='disable' name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <mode name='custom' supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-noTSX'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='ClearwaterForest'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ddpd-u'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sha512'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sm3'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sm4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='ClearwaterForest-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ddpd-u'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sha512'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sm3'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sm4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Denverton'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Dhyana-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Turin'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbpb'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Turin-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbpb'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-128'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-256'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-512'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-128'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-256'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-512'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-noTSX'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v6'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v7'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='KnightsMill'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512er'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512pf'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='KnightsMill-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512er'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512pf'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G4-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tbm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G5-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tbm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SierraForest'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Snowridge'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='athlon'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='athlon-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='core2duo'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='core2duo-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='coreduo'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='coreduo-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='n270'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='n270-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='phenom'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='phenom-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </cpu>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <memoryBacking supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <enum name='sourceType'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <value>file</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <value>anonymous</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <value>memfd</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </memoryBacking>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <devices>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <disk supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='diskDevice'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>disk</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>cdrom</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>floppy</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>lun</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='bus'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>fdc</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>scsi</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>sata</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtio-transitional</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtio-non-transitional</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </disk>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <graphics supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>vnc</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>egl-headless</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>dbus</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </graphics>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <video supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='modelType'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>vga</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>cirrus</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>none</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>bochs</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>ramfb</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </video>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <hostdev supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='mode'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>subsystem</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='startupPolicy'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>default</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>mandatory</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>requisite</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>optional</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='subsysType'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>pci</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>scsi</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='capsType'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='pciBackend'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </hostdev>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <rng supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtio-transitional</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtio-non-transitional</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>random</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>egd</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>builtin</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </rng>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <filesystem supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='driverType'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>path</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>handle</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>virtiofs</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </filesystem>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <tpm supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>tpm-tis</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>tpm-crb</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>emulator</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>external</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='backendVersion'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>2.0</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </tpm>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <redirdev supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='bus'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </redirdev>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <channel supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>pty</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>unix</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </channel>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <crypto supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='model'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>qemu</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>builtin</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </crypto>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <interface supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='backendType'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>default</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>passt</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </interface>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <panic supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>isa</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>hyperv</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </panic>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <console supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>null</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>vc</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>pty</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>dev</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>file</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>pipe</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>stdio</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>udp</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>tcp</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>unix</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>qemu-vdagent</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>dbus</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </console>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </devices>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <features>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <gic supported='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <vmcoreinfo supported='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <genid supported='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <backingStoreInput supported='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <backup supported='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <async-teardown supported='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <s390-pv supported='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <ps2 supported='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <tdx supported='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <sev supported='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <sgx supported='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <hyperv supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='features'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>relaxed</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>vapic</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>spinlocks</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>vpindex</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>runtime</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>synic</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>stimer</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>reset</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>vendor_id</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>frequencies</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>reenlightenment</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>tlbflush</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>ipi</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>avic</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>emsr_bitmap</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>xmm_input</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <defaults>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <spinlocks>4095</spinlocks>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <stimer_direct>on</stimer_direct>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </defaults>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </hyperv>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <launchSecurity supported='no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </features>
Jan 21 09:03:30 np0005590528 nova_compute[238343]: </domainCapabilities>
Jan 21 09:03:30 np0005590528 nova_compute[238343]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 09:03:30 np0005590528 nova_compute[238343]: 2026-01-21 14:03:30.851 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 21 09:03:30 np0005590528 nova_compute[238343]: <domainCapabilities>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <domain>kvm</domain>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <arch>i686</arch>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <vcpu max='240'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <iothreads supported='yes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <os supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <enum name='firmware'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <loader supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>rom</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>pflash</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='readonly'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>yes</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>no</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='secure'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>no</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </loader>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  </os>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:  <cpu>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <mode name='host-passthrough' supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='hostPassthroughMigratable'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>on</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>off</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <mode name='maximum' supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <enum name='maximumMigratable'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>on</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <value>off</value>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <mode name='host-model' supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <vendor>AMD</vendor>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='x2apic'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='hypervisor'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='stibp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='ssbd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='overflow-recov'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='succor'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='lbrv'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc-scale'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='flushbyasid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='pause-filter'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='pfthreshold'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <feature policy='disable' name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:    <mode name='custom' supported='yes'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-noTSX'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 python3.9[239193]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='ClearwaterForest'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ddpd-u'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sha512'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sm3'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sm4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='ClearwaterForest-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ddpd-u'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sha512'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sm3'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sm4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Denverton'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Dhyana-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Turin'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbpb'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Turin-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbpb'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-128'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-256'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-512'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-128'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-256'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx10-512'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-noTSX'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v6'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v7'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='KnightsMill'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512er'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512pf'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='KnightsMill-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512er'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512pf'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G4-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G5'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tbm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G5-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tbm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SierraForest'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v1'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v2'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v3'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v4'>
Jan 21 09:03:30 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='athlon'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='athlon-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='core2duo'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='core2duo-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='coreduo'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='coreduo-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='n270'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='n270-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 09:03:31 np0005590528 systemd[1]: Stopping nova_compute container...
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='phenom'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='phenom-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </cpu>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <memoryBacking supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <enum name='sourceType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>file</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>anonymous</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>memfd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </memoryBacking>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <devices>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <disk supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='diskDevice'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>disk</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>cdrom</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>floppy</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>lun</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='bus'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>ide</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>fdc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>scsi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>sata</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-non-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </disk>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <graphics supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vnc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>egl-headless</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dbus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </graphics>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <video supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='modelType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vga</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>cirrus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>none</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>bochs</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>ramfb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </video>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <hostdev supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='mode'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>subsystem</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='startupPolicy'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>default</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>mandatory</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>requisite</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>optional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='subsysType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pci</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>scsi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='capsType'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='pciBackend'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </hostdev>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <rng supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-non-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>random</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>egd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>builtin</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </rng>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <filesystem supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='driverType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>path</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>handle</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtiofs</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </filesystem>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <tpm supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tpm-tis</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tpm-crb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>emulator</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>external</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendVersion'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>2.0</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </tpm>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <redirdev supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='bus'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </redirdev>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <channel supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pty</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>unix</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </channel>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <crypto supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>qemu</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>builtin</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </crypto>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <interface supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>default</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>passt</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </interface>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <panic supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>isa</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>hyperv</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </panic>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <console supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>null</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pty</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dev</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>file</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pipe</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>stdio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>udp</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tcp</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>unix</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>qemu-vdagent</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dbus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </console>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </devices>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <features>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <gic supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <vmcoreinfo supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <genid supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <backingStoreInput supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <backup supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <async-teardown supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <s390-pv supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <ps2 supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <tdx supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <sev supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <sgx supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <hyperv supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='features'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>relaxed</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vapic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>spinlocks</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vpindex</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>runtime</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>synic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>stimer</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>reset</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vendor_id</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>frequencies</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>reenlightenment</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tlbflush</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>ipi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>avic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>emsr_bitmap</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>xmm_input</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <defaults>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <spinlocks>4095</spinlocks>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <stimer_direct>on</stimer_direct>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </defaults>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </hyperv>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <launchSecurity supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </features>
Jan 21 09:03:31 np0005590528 nova_compute[238343]: </domainCapabilities>
Jan 21 09:03:31 np0005590528 nova_compute[238343]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:30.928 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:30.933 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 21 09:03:31 np0005590528 nova_compute[238343]: <domainCapabilities>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <domain>kvm</domain>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <arch>x86_64</arch>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <vcpu max='4096'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <iothreads supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <os supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <enum name='firmware'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>efi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <loader supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>rom</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pflash</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='readonly'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>yes</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>no</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='secure'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>yes</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>no</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </loader>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </os>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <cpu>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <mode name='host-passthrough' supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='hostPassthroughMigratable'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>on</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>off</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <mode name='maximum' supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='maximumMigratable'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>on</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>off</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <mode name='host-model' supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <vendor>AMD</vendor>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='x2apic'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='hypervisor'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='stibp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='ssbd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='overflow-recov'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='succor'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='lbrv'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc-scale'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='flushbyasid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='pause-filter'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='pfthreshold'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='disable' name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <mode name='custom' supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-noTSX'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='ClearwaterForest'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ddpd-u'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sha512'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sm3'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sm4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='ClearwaterForest-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ddpd-u'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sha512'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sm3'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sm4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Denverton'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Dhyana-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Turin'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbpb'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Turin-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbpb'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-128'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-256'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-512'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-128'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-256'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-512'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-noTSX'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v6'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v7'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='KnightsMill'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512er'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512pf'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='KnightsMill-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512er'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512pf'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G4-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tbm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G5-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tbm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SierraForest'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='athlon'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='athlon-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='core2duo'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='core2duo-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='coreduo'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='coreduo-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='n270'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='n270-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='phenom'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='phenom-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </cpu>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <memoryBacking supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <enum name='sourceType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>file</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>anonymous</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>memfd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </memoryBacking>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <devices>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <disk supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='diskDevice'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>disk</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>cdrom</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>floppy</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>lun</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='bus'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>fdc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>scsi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>sata</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-non-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </disk>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <graphics supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vnc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>egl-headless</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dbus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </graphics>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <video supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='modelType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vga</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>cirrus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>none</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>bochs</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>ramfb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </video>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <hostdev supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='mode'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>subsystem</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='startupPolicy'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>default</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>mandatory</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>requisite</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>optional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='subsysType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pci</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>scsi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='capsType'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='pciBackend'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </hostdev>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <rng supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-non-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>random</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>egd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>builtin</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </rng>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <filesystem supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='driverType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>path</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>handle</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtiofs</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </filesystem>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <tpm supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tpm-tis</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tpm-crb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>emulator</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>external</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendVersion'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>2.0</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </tpm>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <redirdev supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='bus'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </redirdev>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <channel supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pty</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>unix</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </channel>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <crypto supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>qemu</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>builtin</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </crypto>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <interface supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>default</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>passt</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </interface>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <panic supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>isa</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>hyperv</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </panic>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <console supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>null</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pty</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dev</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>file</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pipe</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>stdio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>udp</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tcp</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>unix</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>qemu-vdagent</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dbus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </console>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </devices>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <features>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <gic supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <vmcoreinfo supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <genid supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <backingStoreInput supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <backup supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <async-teardown supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <s390-pv supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <ps2 supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <tdx supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <sev supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <sgx supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <hyperv supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='features'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>relaxed</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vapic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>spinlocks</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vpindex</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>runtime</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>synic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>stimer</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>reset</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vendor_id</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>frequencies</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>reenlightenment</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tlbflush</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>ipi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>avic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>emsr_bitmap</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>xmm_input</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <defaults>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <spinlocks>4095</spinlocks>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <stimer_direct>on</stimer_direct>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </defaults>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </hyperv>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <launchSecurity supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </features>
Jan 21 09:03:31 np0005590528 nova_compute[238343]: </domainCapabilities>
Jan 21 09:03:31 np0005590528 nova_compute[238343]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.005 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 21 09:03:31 np0005590528 nova_compute[238343]: <domainCapabilities>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <domain>kvm</domain>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <arch>x86_64</arch>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <vcpu max='240'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <iothreads supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <os supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <enum name='firmware'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <loader supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>rom</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pflash</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='readonly'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>yes</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>no</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='secure'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>no</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </loader>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </os>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <cpu>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <mode name='host-passthrough' supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='hostPassthroughMigratable'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>on</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>off</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <mode name='maximum' supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='maximumMigratable'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>on</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>off</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <mode name='host-model' supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <vendor>AMD</vendor>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='x2apic'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='hypervisor'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='stibp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='ssbd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='overflow-recov'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='succor'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='lbrv'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='tsc-scale'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='flushbyasid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='pause-filter'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='pfthreshold'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <feature policy='disable' name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <mode name='custom' supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-noTSX'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Broadwell-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='ClearwaterForest'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ddpd-u'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sha512'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sm3'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sm4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='ClearwaterForest-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ddpd-u'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sha512'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sm3'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sm4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Cooperlake-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Denverton'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Denverton-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Dhyana-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Milan-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Rome-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Turin'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbpb'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-Turin-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amd-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='auto-ibrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='perfmon-v2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbpb'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='stibp-always-on'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='EPYC-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-128'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-256'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-512'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='GraniteRapids-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-128'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-256'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx10-512'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='prefetchiti'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-noTSX'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Haswell-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v6'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Icelake-Server-v7'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='IvyBridge-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='KnightsMill'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512er'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512pf'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='KnightsMill-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512er'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512pf'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G4-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tbm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Opteron_G5-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fma4'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tbm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xop'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SapphireRapids-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='amx-tile'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-bf16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-fp16'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bitalg'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrc'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fzrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='la57'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='taa-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SierraForest'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='SierraForest-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ifma'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cmpccxadd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fbsdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='fsrs'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ibrs-all'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='intel-psfd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='lam'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mcdt-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pbrsb-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='psdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='serialize'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vaes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Client-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='hle'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='rtm'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Skylake-Server-v5'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512bw'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512cd'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512dq'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512f'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='avx512vl'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='invpcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pcid'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='pku'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='mpx'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v2'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v3'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='core-capability'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='split-lock-detect'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='Snowridge-v4'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='cldemote'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='erms'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='gfni'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdir64b'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='movdiri'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='xsaves'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='athlon'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='athlon-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='core2duo'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='core2duo-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='coreduo'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='coreduo-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='n270'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='n270-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='ss'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='phenom'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <blockers model='phenom-v1'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnow'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <feature name='3dnowext'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </blockers>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </mode>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </cpu>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <memoryBacking supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <enum name='sourceType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>file</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>anonymous</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <value>memfd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </memoryBacking>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <devices>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <disk supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='diskDevice'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>disk</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>cdrom</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>floppy</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>lun</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='bus'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>ide</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>fdc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>scsi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>sata</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-non-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </disk>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <graphics supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vnc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>egl-headless</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dbus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </graphics>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <video supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='modelType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vga</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>cirrus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>none</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>bochs</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>ramfb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </video>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <hostdev supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='mode'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>subsystem</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='startupPolicy'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>default</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>mandatory</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>requisite</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>optional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='subsysType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pci</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>scsi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='capsType'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='pciBackend'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </hostdev>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <rng supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtio-non-transitional</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>random</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>egd</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>builtin</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </rng>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <filesystem supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='driverType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>path</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>handle</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>virtiofs</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </filesystem>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <tpm supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tpm-tis</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tpm-crb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>emulator</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>external</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendVersion'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>2.0</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </tpm>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <redirdev supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='bus'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>usb</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </redirdev>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <channel supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pty</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>unix</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </channel>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <crypto supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>qemu</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendModel'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>builtin</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </crypto>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <interface supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='backendType'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>default</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>passt</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </interface>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <panic supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='model'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>isa</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>hyperv</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </panic>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <console supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='type'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>null</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vc</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pty</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dev</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>file</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>pipe</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>stdio</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>udp</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tcp</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>unix</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>qemu-vdagent</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>dbus</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </console>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </devices>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  <features>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <gic supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <vmcoreinfo supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <genid supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <backingStoreInput supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <backup supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <async-teardown supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <s390-pv supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <ps2 supported='yes'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <tdx supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <sev supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <sgx supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <hyperv supported='yes'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <enum name='features'>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>relaxed</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vapic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>spinlocks</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vpindex</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>runtime</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>synic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>stimer</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>reset</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>vendor_id</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>frequencies</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>reenlightenment</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>tlbflush</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>ipi</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>avic</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>emsr_bitmap</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <value>xmm_input</value>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </enum>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      <defaults>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <spinlocks>4095</spinlocks>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <stimer_direct>on</stimer_direct>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:      </defaults>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    </hyperv>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:    <launchSecurity supported='no'/>
Jan 21 09:03:31 np0005590528 nova_compute[238343]:  </features>
Jan 21 09:03:31 np0005590528 nova_compute[238343]: </domainCapabilities>
Jan 21 09:03:31 np0005590528 nova_compute[238343]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.072 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.072 238347 INFO nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Secure Boot support detected#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.075 238347 INFO nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.075 238347 INFO nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.087 238347 DEBUG nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.184 238347 INFO nova.virt.node [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Determined node identity 172aa181-ce4f-4953-808e-b8a26e60249f from /var/lib/nova/compute_id#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.207 238347 WARNING nova.compute.manager [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Compute nodes ['172aa181-ce4f-4953-808e-b8a26e60249f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.249 238347 INFO nova.compute.manager [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.292 238347 WARNING nova.compute.manager [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.292 238347 DEBUG oslo_concurrency.lockutils [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.293 238347 DEBUG oslo_concurrency.lockutils [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.293 238347 DEBUG oslo_concurrency.lockutils [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.293 238347 DEBUG nova.compute.resource_tracker [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.293 238347 DEBUG oslo_concurrency.processutils [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.347 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.348 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 09:03:31 np0005590528 nova_compute[238343]: 2026-01-21 14:03:31.348 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 09:03:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:03:31 np0005590528 virtqemud[238983]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 21 09:03:31 np0005590528 virtqemud[238983]: hostname: compute-0
Jan 21 09:03:31 np0005590528 virtqemud[238983]: End of file while reading data: Input/output error
Jan 21 09:03:31 np0005590528 systemd[1]: libpod-7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e.scope: Deactivated successfully.
Jan 21 09:03:31 np0005590528 systemd[1]: libpod-7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e.scope: Consumed 3.241s CPU time.
Jan 21 09:03:31 np0005590528 podman[239201]: 2026-01-21 14:03:31.804878737 +0000 UTC m=+0.800379357 container died 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 09:03:31 np0005590528 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e-userdata-shm.mount: Deactivated successfully.
Jan 21 09:03:31 np0005590528 systemd[1]: var-lib-containers-storage-overlay-68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff-merged.mount: Deactivated successfully.
Jan 21 09:03:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:32 np0005590528 podman[239201]: 2026-01-21 14:03:32.799330212 +0000 UTC m=+1.794830812 container cleanup 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 09:03:32 np0005590528 podman[239201]: nova_compute
Jan 21 09:03:32 np0005590528 podman[239232]: nova_compute
Jan 21 09:03:32 np0005590528 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 21 09:03:32 np0005590528 systemd[1]: Stopped nova_compute container.
Jan 21 09:03:32 np0005590528 systemd[1]: Starting nova_compute container...
Jan 21 09:03:33 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:03:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:33 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:33 np0005590528 podman[239246]: 2026-01-21 14:03:33.482461725 +0000 UTC m=+0.594721410 container init 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:03:33 np0005590528 podman[239246]: 2026-01-21 14:03:33.489725124 +0000 UTC m=+0.601984779 container start 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm)
Jan 21 09:03:33 np0005590528 podman[239246]: nova_compute
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + sudo -E kolla_set_configs
Jan 21 09:03:33 np0005590528 systemd[1]: Started nova_compute container.
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Validating config file
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying service configuration files
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /etc/ceph
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Creating directory /etc/ceph
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Writing out command to execute
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 09:03:33 np0005590528 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 09:03:33 np0005590528 nova_compute[239261]: ++ cat /run_command
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + CMD=nova-compute
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + ARGS=
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + sudo kolla_copy_cacerts
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + [[ ! -n '' ]]
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + . kolla_extend_start
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + echo 'Running command: '\''nova-compute'\'''
Jan 21 09:03:33 np0005590528 nova_compute[239261]: Running command: 'nova-compute'
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + umask 0022
Jan 21 09:03:33 np0005590528 nova_compute[239261]: + exec nova-compute
Jan 21 09:03:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:03:33.893 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:03:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:03:33.894 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:03:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:03:33.894 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:03:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:34 np0005590528 python3.9[239424]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 21 09:03:34 np0005590528 systemd[1]: Started libpod-conmon-be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c.scope.
Jan 21 09:03:34 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:03:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73754ab590c92a8ebd550293de75b61a5236028489e78a0708a64563da83fab0/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73754ab590c92a8ebd550293de75b61a5236028489e78a0708a64563da83fab0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73754ab590c92a8ebd550293de75b61a5236028489e78a0708a64563da83fab0/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:34 np0005590528 podman[239452]: 2026-01-21 14:03:34.489889759 +0000 UTC m=+0.133090472 container init be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 09:03:34 np0005590528 podman[239452]: 2026-01-21 14:03:34.506464227 +0000 UTC m=+0.149664910 container start be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 09:03:34 np0005590528 python3.9[239424]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Applying nova statedir ownership
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 21 09:03:34 np0005590528 nova_compute_init[239473]: INFO:nova_statedir:Nova statedir ownership complete
Jan 21 09:03:34 np0005590528 systemd[1]: libpod-be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c.scope: Deactivated successfully.
Jan 21 09:03:34 np0005590528 podman[239488]: 2026-01-21 14:03:34.625492143 +0000 UTC m=+0.024794600 container died be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 09:03:34 np0005590528 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c-userdata-shm.mount: Deactivated successfully.
Jan 21 09:03:34 np0005590528 systemd[1]: var-lib-containers-storage-overlay-73754ab590c92a8ebd550293de75b61a5236028489e78a0708a64563da83fab0-merged.mount: Deactivated successfully.
Jan 21 09:03:34 np0005590528 podman[239488]: 2026-01-21 14:03:34.652773374 +0000 UTC m=+0.052075831 container cleanup be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:03:34 np0005590528 systemd[1]: libpod-conmon-be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c.scope: Deactivated successfully.
Jan 21 09:03:35 np0005590528 systemd[1]: session-50.scope: Deactivated successfully.
Jan 21 09:03:35 np0005590528 systemd[1]: session-50.scope: Consumed 2min 3.205s CPU time.
Jan 21 09:03:35 np0005590528 systemd-logind[780]: Session 50 logged out. Waiting for processes to exit.
Jan 21 09:03:35 np0005590528 systemd-logind[780]: Removed session 50.
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:03:35 np0005590528 nova_compute[239261]: 2026-01-21 14:03:35.542 239265 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 09:03:35 np0005590528 nova_compute[239261]: 2026-01-21 14:03:35.542 239265 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 09:03:35 np0005590528 nova_compute[239261]: 2026-01-21 14:03:35.543 239265 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 21 09:03:35 np0005590528 nova_compute[239261]: 2026-01-21 14:03:35.543 239265 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 21 09:03:35 np0005590528 nova_compute[239261]: 2026-01-21 14:03:35.685 239265 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:03:35 np0005590528 nova_compute[239261]: 2026-01-21 14:03:35.714 239265 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:03:35 np0005590528 nova_compute[239261]: 2026-01-21 14:03:35.714 239265 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:03:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:03:35 np0005590528 podman[239682]: 2026-01-21 14:03:35.944834476 +0000 UTC m=+0.043873680 container create e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:03:35 np0005590528 systemd[1]: Started libpod-conmon-e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3.scope.
Jan 21 09:03:36 np0005590528 podman[239682]: 2026-01-21 14:03:35.922457166 +0000 UTC m=+0.021496170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:03:36 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:03:36 np0005590528 podman[239682]: 2026-01-21 14:03:36.036507629 +0000 UTC m=+0.135546673 container init e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 09:03:36 np0005590528 podman[239682]: 2026-01-21 14:03:36.044965577 +0000 UTC m=+0.144004611 container start e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 09:03:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:03:36 np0005590528 podman[239682]: 2026-01-21 14:03:36.049121419 +0000 UTC m=+0.148160513 container attach e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 09:03:36 np0005590528 flamboyant_moser[239699]: 167 167
Jan 21 09:03:36 np0005590528 systemd[1]: libpod-e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3.scope: Deactivated successfully.
Jan 21 09:03:36 np0005590528 podman[239682]: 2026-01-21 14:03:36.053252461 +0000 UTC m=+0.152291485 container died e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:03:36 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ee7e10d3bbb4a31c0f93e35eb4ca6c8e37a30a0d952dafabb599399cb750fc12-merged.mount: Deactivated successfully.
Jan 21 09:03:36 np0005590528 podman[239682]: 2026-01-21 14:03:36.13986336 +0000 UTC m=+0.238902364 container remove e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 09:03:36 np0005590528 systemd[1]: libpod-conmon-e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3.scope: Deactivated successfully.
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.156 239265 INFO nova.virt.driver [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.265 239265 INFO nova.compute.provider_config [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.285 239265 DEBUG oslo_concurrency.lockutils [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.285 239265 DEBUG oslo_concurrency.lockutils [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.286 239265 DEBUG oslo_concurrency.lockutils [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.286 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.286 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.287 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.287 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.287 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.288 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.288 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.288 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.290 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.290 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.290 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.290 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.293 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.293 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.293 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.293 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.296 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.296 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.296 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 podman[239724]: 2026-01-21 14:03:36.296697405 +0000 UTC m=+0.040426905 container create cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.298 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.298 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.298 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.299 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.299 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.299 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.300 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.300 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.300 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.300 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.301 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.301 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.301 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.301 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.302 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.302 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.302 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.302 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.303 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.303 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.303 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.304 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.304 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.304 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.305 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.305 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.305 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.305 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.306 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.306 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.306 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.306 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.307 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.307 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.307 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.307 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.308 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.308 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.308 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.309 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.309 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.309 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.309 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.310 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.310 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.310 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.310 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.311 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.311 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.311 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.311 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 systemd[1]: Started libpod-conmon-cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3.scope.
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.352 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.352 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.352 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.352 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 podman[239724]: 2026-01-21 14:03:36.278267792 +0000 UTC m=+0.021997312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 WARNING oslo_config.cfg [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 21 09:03:36 np0005590528 nova_compute[239261]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 21 09:03:36 np0005590528 nova_compute[239261]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 21 09:03:36 np0005590528 nova_compute[239261]: and ``live_migration_inbound_addr`` respectively.
Jan 21 09:03:36 np0005590528 nova_compute[239261]: ).  Its value may be silently ignored in the future.#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_secret_uuid        = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 podman[239724]: 2026-01-21 14:03:36.395700809 +0000 UTC m=+0.139430579 container init cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 podman[239724]: 2026-01-21 14:03:36.40226674 +0000 UTC m=+0.145996240 container start cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 podman[239724]: 2026-01-21 14:03:36.405664314 +0000 UTC m=+0.149393854 container attach cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.442 239265 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.454 239265 INFO nova.virt.node [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Determined node identity 172aa181-ce4f-4953-808e-b8a26e60249f from /var/lib/nova/compute_id#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.455 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.455 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.455 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.456 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.465 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f41aa53e2b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.467 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f41aa53e2b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.468 239265 INFO nova.virt.libvirt.driver [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.473 239265 INFO nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host capabilities <capabilities>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <host>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <uuid>7823760d-0166-4122-8fb2-3165351e57e7</uuid>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <cpu>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <arch>x86_64</arch>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model>EPYC-Rome-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <vendor>AMD</vendor>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <microcode version='16777317'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <signature family='23' model='49' stepping='0'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='x2apic'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='tsc-deadline'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='osxsave'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='hypervisor'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='tsc_adjust'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='spec-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='stibp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='arch-capabilities'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='cmp_legacy'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='topoext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='virt-ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='lbrv'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='tsc-scale'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='vmcb-clean'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='pause-filter'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='pfthreshold'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='svme-addr-chk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='rdctl-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='skip-l1dfl-vmentry'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='mds-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature name='pschange-mc-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <pages unit='KiB' size='4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <pages unit='KiB' size='2048'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <pages unit='KiB' size='1048576'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </cpu>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <power_management>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <suspend_mem/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </power_management>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <iommu support='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <migration_features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <live/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <uri_transports>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <uri_transport>tcp</uri_transport>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <uri_transport>rdma</uri_transport>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </uri_transports>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </migration_features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <topology>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <cells num='1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <cell id='0'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:          <memory unit='KiB'>7864316</memory>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:          <pages unit='KiB' size='2048'>0</pages>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:          <distances>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <sibling id='0' value='10'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:          </distances>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:          <cpus num='8'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:          </cpus>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        </cell>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </cells>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </topology>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <cache>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </cache>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <secmodel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model>selinux</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <doi>0</doi>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </secmodel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <secmodel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model>dac</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <doi>0</doi>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </secmodel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </host>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <guest>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <os_type>hvm</os_type>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <arch name='i686'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <wordsize>32</wordsize>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <domain type='qemu'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <domain type='kvm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </arch>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <pae/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <nonpae/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <acpi default='on' toggle='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <apic default='on' toggle='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <cpuselection/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <deviceboot/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <disksnapshot default='on' toggle='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <externalSnapshot/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </guest>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <guest>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <os_type>hvm</os_type>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <arch name='x86_64'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <wordsize>64</wordsize>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <domain type='qemu'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <domain type='kvm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </arch>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <acpi default='on' toggle='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <apic default='on' toggle='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <cpuselection/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <deviceboot/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <disksnapshot default='on' toggle='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <externalSnapshot/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </guest>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 
Jan 21 09:03:36 np0005590528 nova_compute[239261]: </capabilities>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.480 239265 DEBUG nova.virt.libvirt.volume.mount [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.486 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.489 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 21 09:03:36 np0005590528 nova_compute[239261]: <domainCapabilities>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <domain>kvm</domain>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <arch>i686</arch>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <vcpu max='4096'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <iothreads supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <os supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <enum name='firmware'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <loader supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>rom</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pflash</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='readonly'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>yes</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>no</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='secure'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>no</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </loader>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </os>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <cpu>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='host-passthrough' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='hostPassthroughMigratable'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>on</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>off</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='maximum' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='maximumMigratable'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>on</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>off</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='host-model' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <vendor>AMD</vendor>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='x2apic'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='hypervisor'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='stibp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='overflow-recov'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='succor'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='lbrv'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc-scale'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='flushbyasid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='pause-filter'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='pfthreshold'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='disable' name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='custom' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='ClearwaterForest'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ddpd-u'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sha512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='ClearwaterForest-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ddpd-u'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sha512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Dhyana-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Turin'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbpb'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Turin-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbpb'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-128'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-256'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-128'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-256'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v6'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v7'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='KnightsMill'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512er'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512pf'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='KnightsMill-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512er'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512pf'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G4-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tbm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G5-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tbm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='athlon'>
Jan 21 09:03:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='athlon-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='core2duo'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='core2duo-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='coreduo'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='coreduo-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='n270'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='n270-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='phenom'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='phenom-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </cpu>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <memoryBacking supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <enum name='sourceType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>file</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>anonymous</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>memfd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </memoryBacking>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <devices>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <disk supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='diskDevice'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>disk</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>cdrom</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>floppy</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>lun</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='bus'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>fdc</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>scsi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>usb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>sata</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-non-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </disk>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <graphics supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vnc</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>egl-headless</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>dbus</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </graphics>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <video supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='modelType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vga</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>cirrus</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>none</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>bochs</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>ramfb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </video>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <hostdev supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='mode'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>subsystem</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='startupPolicy'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>default</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>mandatory</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>requisite</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>optional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='subsysType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>usb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pci</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>scsi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='capsType'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='pciBackend'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </hostdev>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <rng supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-non-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendModel'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>random</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>egd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>builtin</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </rng>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <filesystem supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='driverType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>path</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>handle</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtiofs</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </filesystem>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <tpm supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>tpm-tis</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>tpm-crb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendModel'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>emulator</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>external</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendVersion'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>2.0</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </tpm>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <redirdev supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='bus'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>usb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </redirdev>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <channel supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pty</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>unix</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </channel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <crypto supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>qemu</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendModel'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>builtin</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </crypto>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <interface supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>default</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>passt</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </interface>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <panic supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>isa</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>hyperv</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </panic>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <console supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>null</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vc</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pty</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>dev</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>file</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pipe</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>stdio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>udp</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>tcp</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>unix</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>qemu-vdagent</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>dbus</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </console>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </devices>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <gic supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <vmcoreinfo supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <genid supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <backingStoreInput supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <backup supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <async-teardown supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <s390-pv supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <ps2 supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <tdx supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <sev supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <sgx supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <hyperv supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='features'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>relaxed</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vapic</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>spinlocks</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vpindex</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>runtime</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>synic</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>stimer</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>reset</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vendor_id</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>frequencies</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>reenlightenment</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>tlbflush</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>ipi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>avic</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>emsr_bitmap</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>xmm_input</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <defaults>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <spinlocks>4095</spinlocks>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <stimer_direct>on</stimer_direct>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </defaults>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </hyperv>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <launchSecurity supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: </domainCapabilities>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.496 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 21 09:03:36 np0005590528 nova_compute[239261]: <domainCapabilities>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <domain>kvm</domain>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <arch>i686</arch>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <vcpu max='240'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <iothreads supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <os supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <enum name='firmware'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <loader supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>rom</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pflash</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='readonly'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>yes</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>no</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='secure'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>no</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </loader>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </os>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <cpu>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='host-passthrough' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='hostPassthroughMigratable'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>on</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>off</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='maximum' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='maximumMigratable'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>on</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>off</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='host-model' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <vendor>AMD</vendor>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='x2apic'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='hypervisor'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='stibp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='overflow-recov'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='succor'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='lbrv'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc-scale'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='flushbyasid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='pause-filter'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='pfthreshold'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='disable' name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='custom' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='ClearwaterForest'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ddpd-u'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sha512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='ClearwaterForest-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ddpd-u'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sha512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Dhyana-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Turin'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbpb'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Turin-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbpb'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-128'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-256'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-128'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-256'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v6'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v7'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='KnightsMill'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512er'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512pf'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='KnightsMill-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512er'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512pf'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G4-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tbm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G5-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tbm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='athlon'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='athlon-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='core2duo'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='core2duo-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='coreduo'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='coreduo-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='n270'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='n270-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='phenom'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='phenom-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </cpu>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <memoryBacking supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <enum name='sourceType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>file</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>anonymous</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>memfd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </memoryBacking>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <devices>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <disk supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='diskDevice'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>disk</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>cdrom</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>floppy</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>lun</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='bus'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>ide</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>fdc</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>scsi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>usb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>sata</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-non-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </disk>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <graphics supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vnc</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>egl-headless</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>dbus</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </graphics>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <video supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='modelType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vga</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>cirrus</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>none</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>bochs</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>ramfb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </video>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <hostdev supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='mode'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>subsystem</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='startupPolicy'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>default</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>mandatory</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>requisite</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>optional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='subsysType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>usb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pci</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>scsi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='capsType'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='pciBackend'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </hostdev>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <rng supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-non-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendModel'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>random</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>egd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>builtin</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </rng>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <filesystem supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='driverType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>path</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>handle</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtiofs</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </filesystem>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <tpm supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>tpm-tis</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>tpm-crb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendModel'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>emulator</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>external</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendVersion'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>2.0</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </tpm>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <redirdev supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='bus'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>usb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </redirdev>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <channel supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pty</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>unix</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </channel>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <crypto supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>qemu</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendModel'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>builtin</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </crypto>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <interface supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>default</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>passt</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </interface>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <panic supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>isa</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>hyperv</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </panic>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <console supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>null</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vc</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pty</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>dev</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>file</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pipe</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>stdio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>udp</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>tcp</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>unix</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>qemu-vdagent</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>dbus</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </console>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </devices>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <gic supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <vmcoreinfo supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <genid supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <backingStoreInput supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <backup supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <async-teardown supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <s390-pv supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <ps2 supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <tdx supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <sev supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <sgx supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <hyperv supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='features'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>relaxed</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vapic</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>spinlocks</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vpindex</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>runtime</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>synic</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>stimer</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>reset</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vendor_id</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>frequencies</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>reenlightenment</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>tlbflush</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>ipi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>avic</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>emsr_bitmap</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>xmm_input</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <defaults>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <spinlocks>4095</spinlocks>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <stimer_direct>on</stimer_direct>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <tlbflush_direct>on</tlbflush_direct>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <tlbflush_extended>on</tlbflush_extended>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </defaults>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </hyperv>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <launchSecurity supported='no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </features>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: </domainCapabilities>
Jan 21 09:03:36 np0005590528 nova_compute[239261]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.553 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 09:03:36 np0005590528 nova_compute[239261]: 2026-01-21 14:03:36.557 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 21 09:03:36 np0005590528 nova_compute[239261]: <domainCapabilities>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <path>/usr/libexec/qemu-kvm</path>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <domain>kvm</domain>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <arch>x86_64</arch>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <vcpu max='4096'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <iothreads supported='yes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <os supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <enum name='firmware'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>efi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <loader supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>rom</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pflash</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='readonly'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>yes</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>no</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='secure'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>yes</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>no</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </loader>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </os>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <cpu>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='host-passthrough' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='hostPassthroughMigratable'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>on</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>off</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='maximum' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='maximumMigratable'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>on</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>off</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='host-model' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <vendor>AMD</vendor>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='x2apic'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc-deadline'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='hypervisor'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc_adjust'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='spec-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='stibp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='cmp_legacy'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='overflow-recov'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='succor'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='amd-ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='virt-ssbd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='lbrv'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='tsc-scale'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='vmcb-clean'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='flushbyasid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='pause-filter'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='pfthreshold'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='svme-addr-chk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <feature policy='disable' name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <mode name='custom' supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Broadwell-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cascadelake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='ClearwaterForest'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ddpd-u'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sha512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='ClearwaterForest-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ddpd-u'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sha512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm3'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sm4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Cooperlake-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Denverton-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Dhyana-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Genoa-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Milan-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Rome-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Turin'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbpb'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-Turin-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amd-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='auto-ibrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vp2intersect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fs-gs-base-ns'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibpb-brtype'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='no-nested-data-bp'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='null-sel-clr-base'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='perfmon-v2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbpb'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='srso-user-kernel-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='stibp-always-on'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='EPYC-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-128'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-256'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='GraniteRapids-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-128'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-256'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx10-512'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='prefetchiti'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Haswell-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-noTSX'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v6'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Icelake-Server-v7'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='IvyBridge-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='KnightsMill'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512er'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512pf'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='KnightsMill-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4fmaps'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-4vnniw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512er'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512pf'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G4-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tbm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Opteron_G5-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fma4'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tbm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xop'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SapphireRapids-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='amx-tile'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-bf16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-fp16'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512-vpopcntdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bitalg'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vbmi2'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrc'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fzrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='la57'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='taa-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='tsx-ldtrk'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='SierraForest-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ifma'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-ne-convert'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx-vnni-int8'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bhi-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='bus-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cmpccxadd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fbsdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='fsrs'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ibrs-all'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='intel-psfd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ipred-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='lam'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mcdt-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pbrsb-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='psdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rrsba-ctrl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='sbdr-ssdp-no'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='serialize'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vaes'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='vpclmulqdq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Client-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='hle'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='rtm'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Skylake-Server-v5'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512bw'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512cd'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512dq'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512f'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='avx512vl'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='invpcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pcid'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='pku'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='mpx'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v2'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v3'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='core-capability'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='split-lock-detect'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='Snowridge-v4'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='cldemote'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='erms'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='gfni'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdir64b'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='movdiri'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='xsaves'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='athlon'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='athlon-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='core2duo'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='core2duo-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='coreduo'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='coreduo-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='n270'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='n270-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='ss'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='phenom'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <blockers model='phenom-v1'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnow'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <feature name='3dnowext'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </blockers>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </mode>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </cpu>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <memoryBacking supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <enum name='sourceType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>file</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>anonymous</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <value>memfd</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  </memoryBacking>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:  <devices>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <disk supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='diskDevice'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>disk</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>cdrom</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>floppy</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>lun</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='bus'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>fdc</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>scsi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>usb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>sata</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-non-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </disk>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <graphics supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='type'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vnc</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>egl-headless</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>dbus</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </graphics>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <video supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='modelType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>vga</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>cirrus</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>none</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>bochs</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>ramfb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </video>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <hostdev supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='mode'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>subsystem</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='startupPolicy'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>default</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>mandatory</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>requisite</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>optional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='subsysType'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>usb</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>pci</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>scsi</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='capsType'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='pciBackend'/>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    </hostdev>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:    <rng supported='yes'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='model'>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:        <value>virtio-non-transitional</value>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      </enum>
Jan 21 09:03:36 np0005590528 nova_compute[239261]:      <enum name='backendModel'>
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:06:55 np0005590528 rsyslogd[1002]: imjournal: 3588 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 21 09:06:55 np0005590528 podman[242393]: 2026-01-21 14:06:55.80066632 +0000 UTC m=+0.069813453 container create 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:06:55 np0005590528 systemd[1]: Started libpod-conmon-7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279.scope.
Jan 21 09:06:55 np0005590528 podman[242393]: 2026-01-21 14:06:55.774322754 +0000 UTC m=+0.043469957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:06:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:06:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:06:55 np0005590528 podman[242393]: 2026-01-21 14:06:55.968894918 +0000 UTC m=+0.238042051 container init 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:06:55 np0005590528 podman[242393]: 2026-01-21 14:06:55.981040485 +0000 UTC m=+0.250187608 container start 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:06:55 np0005590528 podman[242393]: 2026-01-21 14:06:55.98570573 +0000 UTC m=+0.254852873 container attach 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:06:55 np0005590528 xenodochial_ellis[242409]: 167 167
Jan 21 09:06:55 np0005590528 systemd[1]: libpod-7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279.scope: Deactivated successfully.
Jan 21 09:06:55 np0005590528 podman[242393]: 2026-01-21 14:06:55.989747448 +0000 UTC m=+0.258894581 container died 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 09:06:56 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0ee760162e3a04b8f6f820fc43eb925166ec459eaf9fe3e2848e4117a55c88a6-merged.mount: Deactivated successfully.
Jan 21 09:06:56 np0005590528 podman[242393]: 2026-01-21 14:06:56.052286763 +0000 UTC m=+0.321433886 container remove 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:06:56 np0005590528 systemd[1]: libpod-conmon-7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279.scope: Deactivated successfully.
Jan 21 09:06:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:06:56 np0005590528 podman[242433]: 2026-01-21 14:06:56.258710568 +0000 UTC m=+0.065711013 container create 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:06:56 np0005590528 systemd[1]: Started libpod-conmon-7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861.scope.
Jan 21 09:06:56 np0005590528 podman[242433]: 2026-01-21 14:06:56.214128244 +0000 UTC m=+0.021128699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:06:56 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:06:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:56 np0005590528 podman[242433]: 2026-01-21 14:06:56.347509726 +0000 UTC m=+0.154510191 container init 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:06:56 np0005590528 podman[242433]: 2026-01-21 14:06:56.358801893 +0000 UTC m=+0.165802328 container start 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 09:06:56 np0005590528 podman[242433]: 2026-01-21 14:06:56.363353355 +0000 UTC m=+0.170353810 container attach 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 21 09:06:56 np0005590528 friendly_mayer[242450]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:06:56 np0005590528 friendly_mayer[242450]: --> All data devices are unavailable
Jan 21 09:06:56 np0005590528 systemd[1]: libpod-7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861.scope: Deactivated successfully.
Jan 21 09:06:56 np0005590528 podman[242433]: 2026-01-21 14:06:56.985256922 +0000 UTC m=+0.792257427 container died 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:06:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7-merged.mount: Deactivated successfully.
Jan 21 09:06:57 np0005590528 podman[242433]: 2026-01-21 14:06:57.042702712 +0000 UTC m=+0.849703147 container remove 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:06:57 np0005590528 systemd[1]: libpod-conmon-7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861.scope: Deactivated successfully.
Jan 21 09:06:57 np0005590528 podman[242544]: 2026-01-21 14:06:57.504393279 +0000 UTC m=+0.042143585 container create 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:06:57 np0005590528 systemd[1]: Started libpod-conmon-8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307.scope.
Jan 21 09:06:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:06:57 np0005590528 podman[242544]: 2026-01-21 14:06:57.486986102 +0000 UTC m=+0.024736428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:06:57 np0005590528 podman[242544]: 2026-01-21 14:06:57.587124769 +0000 UTC m=+0.124875055 container init 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:06:57 np0005590528 podman[242544]: 2026-01-21 14:06:57.593462115 +0000 UTC m=+0.131212391 container start 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 09:06:57 np0005590528 podman[242544]: 2026-01-21 14:06:57.596740155 +0000 UTC m=+0.134490421 container attach 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:06:57 np0005590528 systemd[1]: libpod-8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307.scope: Deactivated successfully.
Jan 21 09:06:57 np0005590528 tender_brown[242561]: 167 167
Jan 21 09:06:57 np0005590528 conmon[242561]: conmon 8e475f7a6301f94f2ea1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307.scope/container/memory.events
Jan 21 09:06:57 np0005590528 podman[242544]: 2026-01-21 14:06:57.598363364 +0000 UTC m=+0.136113640 container died 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:06:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6590edf33a4743fcc22988c4eba794ed0b6df0b77f2fc6b6068fca0f43750698-merged.mount: Deactivated successfully.
Jan 21 09:06:57 np0005590528 podman[242544]: 2026-01-21 14:06:57.63771101 +0000 UTC m=+0.175461286 container remove 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 09:06:57 np0005590528 systemd[1]: libpod-conmon-8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307.scope: Deactivated successfully.
Jan 21 09:06:57 np0005590528 podman[242583]: 2026-01-21 14:06:57.875044993 +0000 UTC m=+0.073696800 container create c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 09:06:57 np0005590528 systemd[1]: Started libpod-conmon-c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7.scope.
Jan 21 09:06:57 np0005590528 podman[242583]: 2026-01-21 14:06:57.843073708 +0000 UTC m=+0.041725595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:06:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:06:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:57 np0005590528 podman[242583]: 2026-01-21 14:06:57.976886021 +0000 UTC m=+0.175537878 container init c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:06:57 np0005590528 podman[242583]: 2026-01-21 14:06:57.987103372 +0000 UTC m=+0.185755199 container start c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:06:57 np0005590528 podman[242583]: 2026-01-21 14:06:57.992401282 +0000 UTC m=+0.191053109 container attach c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 09:06:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]: {
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:    "0": [
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:        {
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "devices": [
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "/dev/loop3"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            ],
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_name": "ceph_lv0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_size": "21470642176",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "name": "ceph_lv0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "tags": {
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cluster_name": "ceph",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.crush_device_class": "",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.encrypted": "0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.objectstore": "bluestore",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osd_id": "0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.type": "block",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.vdo": "0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.with_tpm": "0"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            },
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "type": "block",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "vg_name": "ceph_vg0"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:        }
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:    ],
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:    "1": [
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:        {
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "devices": [
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "/dev/loop4"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            ],
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_name": "ceph_lv1",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_size": "21470642176",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "name": "ceph_lv1",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "tags": {
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cluster_name": "ceph",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.crush_device_class": "",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.encrypted": "0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.objectstore": "bluestore",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osd_id": "1",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.type": "block",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.vdo": "0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.with_tpm": "0"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            },
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "type": "block",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "vg_name": "ceph_vg1"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:        }
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:    ],
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:    "2": [
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:        {
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "devices": [
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "/dev/loop5"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            ],
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_name": "ceph_lv2",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_size": "21470642176",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "name": "ceph_lv2",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "tags": {
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.cluster_name": "ceph",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.crush_device_class": "",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.encrypted": "0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.objectstore": "bluestore",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osd_id": "2",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.type": "block",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.vdo": "0",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:                "ceph.with_tpm": "0"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            },
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "type": "block",
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:            "vg_name": "ceph_vg2"
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:        }
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]:    ]
Jan 21 09:06:58 np0005590528 competent_mendeleev[242600]: }
Jan 21 09:06:58 np0005590528 systemd[1]: libpod-c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7.scope: Deactivated successfully.
Jan 21 09:06:58 np0005590528 podman[242583]: 2026-01-21 14:06:58.334608377 +0000 UTC m=+0.533260234 container died c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:06:58 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6-merged.mount: Deactivated successfully.
Jan 21 09:06:58 np0005590528 podman[242583]: 2026-01-21 14:06:58.381934699 +0000 UTC m=+0.580586486 container remove c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 09:06:58 np0005590528 systemd[1]: libpod-conmon-c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7.scope: Deactivated successfully.
Jan 21 09:06:58 np0005590528 podman[242684]: 2026-01-21 14:06:58.875063777 +0000 UTC m=+0.065363665 container create 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 09:06:58 np0005590528 systemd[1]: Started libpod-conmon-458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8.scope.
Jan 21 09:06:58 np0005590528 podman[242684]: 2026-01-21 14:06:58.848217658 +0000 UTC m=+0.038517626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:06:58 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:06:58 np0005590528 podman[242684]: 2026-01-21 14:06:58.970432126 +0000 UTC m=+0.160732044 container init 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:06:58 np0005590528 podman[242684]: 2026-01-21 14:06:58.982893252 +0000 UTC m=+0.173193130 container start 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 09:06:58 np0005590528 amazing_mayer[242700]: 167 167
Jan 21 09:06:58 np0005590528 systemd[1]: libpod-458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8.scope: Deactivated successfully.
Jan 21 09:06:58 np0005590528 podman[242684]: 2026-01-21 14:06:58.987038904 +0000 UTC m=+0.177338892 container attach 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 09:06:58 np0005590528 podman[242684]: 2026-01-21 14:06:58.987733612 +0000 UTC m=+0.178033490 container died 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 09:06:59 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9cd080c3785eb7b9ceb7ac3b09a7c826d06d65a72f9cd61ff09a025e1926df19-merged.mount: Deactivated successfully.
Jan 21 09:06:59 np0005590528 podman[242684]: 2026-01-21 14:06:59.033689578 +0000 UTC m=+0.223989456 container remove 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:06:59 np0005590528 systemd[1]: libpod-conmon-458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8.scope: Deactivated successfully.
Jan 21 09:06:59 np0005590528 podman[242724]: 2026-01-21 14:06:59.232212359 +0000 UTC m=+0.071877614 container create e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:06:59 np0005590528 systemd[1]: Started libpod-conmon-e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2.scope.
Jan 21 09:06:59 np0005590528 podman[242724]: 2026-01-21 14:06:59.199787514 +0000 UTC m=+0.039452779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:06:59 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:06:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:06:59 np0005590528 podman[242724]: 2026-01-21 14:06:59.328250846 +0000 UTC m=+0.167916081 container init e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:06:59 np0005590528 podman[242724]: 2026-01-21 14:06:59.334688963 +0000 UTC m=+0.174354178 container start e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:06:59 np0005590528 podman[242724]: 2026-01-21 14:06:59.338877586 +0000 UTC m=+0.178542811 container attach e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:06:59 np0005590528 lvm[242820]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:06:59 np0005590528 lvm[242820]: VG ceph_vg1 finished
Jan 21 09:06:59 np0005590528 lvm[242819]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:06:59 np0005590528 lvm[242819]: VG ceph_vg0 finished
Jan 21 09:07:00 np0005590528 lvm[242822]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:07:00 np0005590528 lvm[242822]: VG ceph_vg2 finished
Jan 21 09:07:00 np0005590528 magical_goodall[242741]: {}
Jan 21 09:07:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:00 np0005590528 podman[242724]: 2026-01-21 14:07:00.146169522 +0000 UTC m=+0.985834747 container died e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:07:00 np0005590528 systemd[1]: libpod-e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2.scope: Deactivated successfully.
Jan 21 09:07:00 np0005590528 systemd[1]: libpod-e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2.scope: Consumed 1.317s CPU time.
Jan 21 09:07:00 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc-merged.mount: Deactivated successfully.
Jan 21 09:07:00 np0005590528 podman[242724]: 2026-01-21 14:07:00.348428244 +0000 UTC m=+1.188093469 container remove e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 09:07:00 np0005590528 systemd[1]: libpod-conmon-e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2.scope: Deactivated successfully.
Jan 21 09:07:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:07:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:07:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:07:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:07:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:07:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:07:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:07:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:07:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:07:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:07:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:07:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:07:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:07:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079051532' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:07:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:07:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079051532' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:07:23 np0005590528 podman[242863]: 2026-01-21 14:07:23.344770218 +0000 UTC m=+0.069149358 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 09:07:23 np0005590528 podman[242862]: 2026-01-21 14:07:23.374645184 +0000 UTC m=+0.098098551 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 21 09:07:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:07:33.897 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:07:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:07:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:07:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:07:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:07:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:35 np0005590528 nova_compute[239261]: 2026-01-21 14:07:35.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.749 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.749 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.750 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.750 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.775 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.776 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.776 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.777 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:07:37 np0005590528 nova_compute[239261]: 2026-01-21 14:07:37.777 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:07:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:07:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761831223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:07:38 np0005590528 nova_compute[239261]: 2026-01-21 14:07:38.336 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:07:38 np0005590528 nova_compute[239261]: 2026-01-21 14:07:38.517 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:07:38 np0005590528 nova_compute[239261]: 2026-01-21 14:07:38.519 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:07:38 np0005590528 nova_compute[239261]: 2026-01-21 14:07:38.519 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:07:38 np0005590528 nova_compute[239261]: 2026-01-21 14:07:38.520 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:07:38 np0005590528 nova_compute[239261]: 2026-01-21 14:07:38.616 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:07:38 np0005590528 nova_compute[239261]: 2026-01-21 14:07:38.617 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:07:38 np0005590528 nova_compute[239261]: 2026-01-21 14:07:38.635 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:07:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:07:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282412038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:07:39 np0005590528 nova_compute[239261]: 2026-01-21 14:07:39.187 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:07:39 np0005590528 nova_compute[239261]: 2026-01-21 14:07:39.193 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:07:39 np0005590528 nova_compute[239261]: 2026-01-21 14:07:39.240 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:07:39 np0005590528 nova_compute[239261]: 2026-01-21 14:07:39.243 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:07:39 np0005590528 nova_compute[239261]: 2026-01-21 14:07:39.243 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:07:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:07:39
Jan 21 09:07:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:07:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:07:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 21 09:07:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:07:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:40 np0005590528 nova_compute[239261]: 2026-01-21 14:07:40.218 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:40 np0005590528 nova_compute[239261]: 2026-01-21 14:07:40.219 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:40 np0005590528 nova_compute[239261]: 2026-01-21 14:07:40.219 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:40 np0005590528 nova_compute[239261]: 2026-01-21 14:07:40.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:40 np0005590528 nova_compute[239261]: 2026-01-21 14:07:40.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:07:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:07:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:07:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:07:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:07:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:54 np0005590528 podman[242953]: 2026-01-21 14:07:54.379432145 +0000 UTC m=+0.097424494 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 09:07:54 np0005590528 podman[242952]: 2026-01-21 14:07:54.389239079 +0000 UTC m=+0.105429333 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 09:07:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:07:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:07:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:08:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:08:01 np0005590528 podman[243142]: 2026-01-21 14:08:01.944378818 +0000 UTC m=+0.071067436 container create 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 09:08:01 np0005590528 systemd[1]: Started libpod-conmon-1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9.scope.
Jan 21 09:08:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:08:02 np0005590528 podman[243142]: 2026-01-21 14:08:01.919803105 +0000 UTC m=+0.046491743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:08:02 np0005590528 podman[243142]: 2026-01-21 14:08:02.0333759 +0000 UTC m=+0.160064518 container init 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:08:02 np0005590528 podman[243142]: 2026-01-21 14:08:02.042235161 +0000 UTC m=+0.168923759 container start 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:08:02 np0005590528 podman[243142]: 2026-01-21 14:08:02.045807211 +0000 UTC m=+0.172495789 container attach 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 09:08:02 np0005590528 strange_blackwell[243158]: 167 167
Jan 21 09:08:02 np0005590528 systemd[1]: libpod-1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9.scope: Deactivated successfully.
Jan 21 09:08:02 np0005590528 podman[243142]: 2026-01-21 14:08:02.049373019 +0000 UTC m=+0.176061607 container died 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 09:08:02 np0005590528 systemd[1]: var-lib-containers-storage-overlay-76fd31376504dff5d799e2649246d3948fa5cfa3028ae644c4734b629d82eea8-merged.mount: Deactivated successfully.
Jan 21 09:08:02 np0005590528 podman[243142]: 2026-01-21 14:08:02.097802359 +0000 UTC m=+0.224490937 container remove 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 21 09:08:02 np0005590528 systemd[1]: libpod-conmon-1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9.scope: Deactivated successfully.
Jan 21 09:08:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:02 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:08:02 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:08:02 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:08:02 np0005590528 podman[243179]: 2026-01-21 14:08:02.346273923 +0000 UTC m=+0.070668115 container create 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:08:02 np0005590528 systemd[1]: Started libpod-conmon-45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9.scope.
Jan 21 09:08:02 np0005590528 podman[243179]: 2026-01-21 14:08:02.314402457 +0000 UTC m=+0.038796779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:08:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:08:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:02 np0005590528 podman[243179]: 2026-01-21 14:08:02.458230908 +0000 UTC m=+0.182625180 container init 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 09:08:02 np0005590528 podman[243179]: 2026-01-21 14:08:02.468502705 +0000 UTC m=+0.192896897 container start 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 09:08:02 np0005590528 podman[243179]: 2026-01-21 14:08:02.474138695 +0000 UTC m=+0.198532887 container attach 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:08:02 np0005590528 heuristic_williamson[243196]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:08:02 np0005590528 heuristic_williamson[243196]: --> All data devices are unavailable
Jan 21 09:08:02 np0005590528 systemd[1]: libpod-45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9.scope: Deactivated successfully.
Jan 21 09:08:02 np0005590528 podman[243179]: 2026-01-21 14:08:02.94128533 +0000 UTC m=+0.665679522 container died 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 21 09:08:02 np0005590528 systemd[1]: var-lib-containers-storage-overlay-f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e-merged.mount: Deactivated successfully.
Jan 21 09:08:02 np0005590528 podman[243179]: 2026-01-21 14:08:02.984310645 +0000 UTC m=+0.708704837 container remove 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:08:02 np0005590528 systemd[1]: libpod-conmon-45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9.scope: Deactivated successfully.
Jan 21 09:08:03 np0005590528 podman[243288]: 2026-01-21 14:08:03.532468861 +0000 UTC m=+0.071027284 container create b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 09:08:03 np0005590528 systemd[1]: Started libpod-conmon-b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5.scope.
Jan 21 09:08:03 np0005590528 podman[243288]: 2026-01-21 14:08:03.503520449 +0000 UTC m=+0.042078922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:08:03 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:08:03 np0005590528 podman[243288]: 2026-01-21 14:08:03.637130685 +0000 UTC m=+0.175689108 container init b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:08:03 np0005590528 podman[243288]: 2026-01-21 14:08:03.644338025 +0000 UTC m=+0.182896408 container start b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 21 09:08:03 np0005590528 busy_stonebraker[243304]: 167 167
Jan 21 09:08:03 np0005590528 systemd[1]: libpod-b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5.scope: Deactivated successfully.
Jan 21 09:08:03 np0005590528 podman[243288]: 2026-01-21 14:08:03.648809846 +0000 UTC m=+0.187368339 container attach b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 09:08:03 np0005590528 podman[243288]: 2026-01-21 14:08:03.650301673 +0000 UTC m=+0.188860096 container died b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 09:08:03 np0005590528 systemd[1]: var-lib-containers-storage-overlay-490c098f01333b9c875593af7ff61ce2b44c1531fb61ea554d5f29ebf5e0a540-merged.mount: Deactivated successfully.
Jan 21 09:08:03 np0005590528 podman[243288]: 2026-01-21 14:08:03.715041781 +0000 UTC m=+0.253600204 container remove b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 09:08:03 np0005590528 systemd[1]: libpod-conmon-b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5.scope: Deactivated successfully.
Jan 21 09:08:03 np0005590528 podman[243330]: 2026-01-21 14:08:03.949769571 +0000 UTC m=+0.075073285 container create 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 09:08:03 np0005590528 systemd[1]: Started libpod-conmon-63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd.scope.
Jan 21 09:08:04 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:08:04 np0005590528 podman[243330]: 2026-01-21 14:08:03.919413314 +0000 UTC m=+0.044717118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:08:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:04 np0005590528 podman[243330]: 2026-01-21 14:08:04.039115982 +0000 UTC m=+0.164419726 container init 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:08:04 np0005590528 podman[243330]: 2026-01-21 14:08:04.04581675 +0000 UTC m=+0.171120464 container start 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 09:08:04 np0005590528 podman[243330]: 2026-01-21 14:08:04.049196304 +0000 UTC m=+0.174500048 container attach 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:08:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:04 np0005590528 great_shaw[243346]: {
Jan 21 09:08:04 np0005590528 great_shaw[243346]:    "0": [
Jan 21 09:08:04 np0005590528 great_shaw[243346]:        {
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "devices": [
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "/dev/loop3"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            ],
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_name": "ceph_lv0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_size": "21470642176",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "name": "ceph_lv0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "tags": {
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cluster_name": "ceph",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.crush_device_class": "",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.encrypted": "0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.objectstore": "bluestore",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osd_id": "0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.type": "block",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.vdo": "0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.with_tpm": "0"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            },
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "type": "block",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "vg_name": "ceph_vg0"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:        }
Jan 21 09:08:04 np0005590528 great_shaw[243346]:    ],
Jan 21 09:08:04 np0005590528 great_shaw[243346]:    "1": [
Jan 21 09:08:04 np0005590528 great_shaw[243346]:        {
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "devices": [
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "/dev/loop4"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            ],
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_name": "ceph_lv1",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_size": "21470642176",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "name": "ceph_lv1",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "tags": {
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cluster_name": "ceph",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.crush_device_class": "",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.encrypted": "0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.objectstore": "bluestore",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osd_id": "1",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.type": "block",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.vdo": "0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.with_tpm": "0"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            },
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "type": "block",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "vg_name": "ceph_vg1"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:        }
Jan 21 09:08:04 np0005590528 great_shaw[243346]:    ],
Jan 21 09:08:04 np0005590528 great_shaw[243346]:    "2": [
Jan 21 09:08:04 np0005590528 great_shaw[243346]:        {
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "devices": [
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "/dev/loop5"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            ],
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_name": "ceph_lv2",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_size": "21470642176",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "name": "ceph_lv2",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "tags": {
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.cluster_name": "ceph",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.crush_device_class": "",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.encrypted": "0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.objectstore": "bluestore",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osd_id": "2",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.type": "block",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.vdo": "0",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:                "ceph.with_tpm": "0"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            },
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "type": "block",
Jan 21 09:08:04 np0005590528 great_shaw[243346]:            "vg_name": "ceph_vg2"
Jan 21 09:08:04 np0005590528 great_shaw[243346]:        }
Jan 21 09:08:04 np0005590528 great_shaw[243346]:    ]
Jan 21 09:08:04 np0005590528 great_shaw[243346]: }
Jan 21 09:08:04 np0005590528 systemd[1]: libpod-63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd.scope: Deactivated successfully.
Jan 21 09:08:04 np0005590528 podman[243355]: 2026-01-21 14:08:04.414452834 +0000 UTC m=+0.026693587 container died 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 09:08:04 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d-merged.mount: Deactivated successfully.
Jan 21 09:08:04 np0005590528 podman[243355]: 2026-01-21 14:08:04.474268708 +0000 UTC m=+0.086509431 container remove 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 21 09:08:04 np0005590528 systemd[1]: libpod-conmon-63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd.scope: Deactivated successfully.
Jan 21 09:08:05 np0005590528 podman[243431]: 2026-01-21 14:08:05.063155782 +0000 UTC m=+0.067432485 container create cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 09:08:05 np0005590528 systemd[1]: Started libpod-conmon-cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5.scope.
Jan 21 09:08:05 np0005590528 podman[243431]: 2026-01-21 14:08:05.036838095 +0000 UTC m=+0.041114818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:08:05 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:08:05 np0005590528 podman[243431]: 2026-01-21 14:08:05.172716958 +0000 UTC m=+0.176993681 container init cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:08:05 np0005590528 podman[243431]: 2026-01-21 14:08:05.187609449 +0000 UTC m=+0.191886142 container start cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 09:08:05 np0005590528 podman[243431]: 2026-01-21 14:08:05.192512532 +0000 UTC m=+0.196789245 container attach cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 09:08:05 np0005590528 upbeat_golick[243448]: 167 167
Jan 21 09:08:05 np0005590528 systemd[1]: libpod-cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5.scope: Deactivated successfully.
Jan 21 09:08:05 np0005590528 podman[243431]: 2026-01-21 14:08:05.194768859 +0000 UTC m=+0.199045572 container died cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:08:05 np0005590528 systemd[1]: var-lib-containers-storage-overlay-aba33f38f9567959acb77ac2c498a2a7f22dec7764ecacd933fa01fb889e05cf-merged.mount: Deactivated successfully.
Jan 21 09:08:05 np0005590528 podman[243431]: 2026-01-21 14:08:05.255147496 +0000 UTC m=+0.259424179 container remove cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:08:05 np0005590528 systemd[1]: libpod-conmon-cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5.scope: Deactivated successfully.
Jan 21 09:08:05 np0005590528 podman[243471]: 2026-01-21 14:08:05.443891868 +0000 UTC m=+0.055109087 container create e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:08:05 np0005590528 systemd[1]: Started libpod-conmon-e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392.scope.
Jan 21 09:08:05 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:08:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:05 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:08:05 np0005590528 podman[243471]: 2026-01-21 14:08:05.426978476 +0000 UTC m=+0.038195675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:08:05 np0005590528 podman[243471]: 2026-01-21 14:08:05.52524266 +0000 UTC m=+0.136459889 container init e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 09:08:05 np0005590528 podman[243471]: 2026-01-21 14:08:05.530568292 +0000 UTC m=+0.141785501 container start e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 21 09:08:05 np0005590528 podman[243471]: 2026-01-21 14:08:05.535820254 +0000 UTC m=+0.147037473 container attach e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 09:08:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:06 np0005590528 lvm[243569]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:08:06 np0005590528 lvm[243569]: VG ceph_vg2 finished
Jan 21 09:08:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:06 np0005590528 lvm[243567]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:08:06 np0005590528 lvm[243567]: VG ceph_vg1 finished
Jan 21 09:08:06 np0005590528 lvm[243566]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:08:06 np0005590528 lvm[243566]: VG ceph_vg0 finished
Jan 21 09:08:06 np0005590528 musing_lichterman[243487]: {}
Jan 21 09:08:06 np0005590528 systemd[1]: libpod-e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392.scope: Deactivated successfully.
Jan 21 09:08:06 np0005590528 systemd[1]: libpod-e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392.scope: Consumed 1.213s CPU time.
Jan 21 09:08:06 np0005590528 podman[243471]: 2026-01-21 14:08:06.291135574 +0000 UTC m=+0.902352773 container died e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 21 09:08:06 np0005590528 systemd[1]: var-lib-containers-storage-overlay-62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e-merged.mount: Deactivated successfully.
Jan 21 09:08:06 np0005590528 podman[243471]: 2026-01-21 14:08:06.334767224 +0000 UTC m=+0.945984433 container remove e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:08:06 np0005590528 systemd[1]: libpod-conmon-e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392.scope: Deactivated successfully.
Jan 21 09:08:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:08:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:08:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:08:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:08:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:08:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:08:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:08:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:08:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:08:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:08:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:08:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:08:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:08:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164572454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:08:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:08:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164572454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:08:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:25 np0005590528 podman[243609]: 2026-01-21 14:08:25.367083977 +0000 UTC m=+0.086466251 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:08:25 np0005590528 podman[243608]: 2026-01-21 14:08:25.367981329 +0000 UTC m=+0.087371833 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 21 09:08:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:08:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:08:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:08:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:08:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:08:33.899 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:08:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:35 np0005590528 nova_compute[239261]: 2026-01-21 14:08:35.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:35 np0005590528 nova_compute[239261]: 2026-01-21 14:08:35.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 21 09:08:35 np0005590528 nova_compute[239261]: 2026-01-21 14:08:35.745 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 21 09:08:35 np0005590528 nova_compute[239261]: 2026-01-21 14:08:35.746 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:35 np0005590528 nova_compute[239261]: 2026-01-21 14:08:35.747 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 21 09:08:35 np0005590528 nova_compute[239261]: 2026-01-21 14:08:35.775 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:38 np0005590528 nova_compute[239261]: 2026-01-21 14:08:38.793 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:38 np0005590528 nova_compute[239261]: 2026-01-21 14:08:38.794 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:08:38 np0005590528 nova_compute[239261]: 2026-01-21 14:08:38.794 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:08:38 np0005590528 nova_compute[239261]: 2026-01-21 14:08:38.996 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:08:38 np0005590528 nova_compute[239261]: 2026-01-21 14:08:38.996 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:38 np0005590528 nova_compute[239261]: 2026-01-21 14:08:38.997 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:08:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:08:39
Jan 21 09:08:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:08:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:08:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 21 09:08:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:08:39 np0005590528 nova_compute[239261]: 2026-01-21 14:08:39.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:39 np0005590528 nova_compute[239261]: 2026-01-21 14:08:39.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:39 np0005590528 nova_compute[239261]: 2026-01-21 14:08:39.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:40 np0005590528 nova_compute[239261]: 2026-01-21 14:08:40.182 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:08:40 np0005590528 nova_compute[239261]: 2026-01-21 14:08:40.184 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:08:40 np0005590528 nova_compute[239261]: 2026-01-21 14:08:40.184 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:08:40 np0005590528 nova_compute[239261]: 2026-01-21 14:08:40.184 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:08:40 np0005590528 nova_compute[239261]: 2026-01-21 14:08:40.185 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:08:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:08:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867713952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:08:40 np0005590528 nova_compute[239261]: 2026-01-21 14:08:40.757 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:08:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:41 np0005590528 nova_compute[239261]: 2026-01-21 14:08:41.002 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:08:41 np0005590528 nova_compute[239261]: 2026-01-21 14:08:41.003 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:08:41 np0005590528 nova_compute[239261]: 2026-01-21 14:08:41.004 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:08:41 np0005590528 nova_compute[239261]: 2026-01-21 14:08:41.004 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:08:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:08:41 np0005590528 nova_compute[239261]: 2026-01-21 14:08:41.621 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:08:41 np0005590528 nova_compute[239261]: 2026-01-21 14:08:41.622 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:08:41 np0005590528 nova_compute[239261]: 2026-01-21 14:08:41.754 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing inventories for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 21 09:08:42 np0005590528 nova_compute[239261]: 2026-01-21 14:08:42.040 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating ProviderTree inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 21 09:08:42 np0005590528 nova_compute[239261]: 2026-01-21 14:08:42.040 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 21 09:08:42 np0005590528 nova_compute[239261]: 2026-01-21 14:08:42.057 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing aggregate associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 21 09:08:42 np0005590528 nova_compute[239261]: 2026-01-21 14:08:42.083 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing trait associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 21 09:08:42 np0005590528 nova_compute[239261]: 2026-01-21 14:08:42.097 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:08:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:08:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151325852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:08:42 np0005590528 nova_compute[239261]: 2026-01-21 14:08:42.683 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:08:42 np0005590528 nova_compute[239261]: 2026-01-21 14:08:42.691 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:08:43 np0005590528 nova_compute[239261]: 2026-01-21 14:08:43.079 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:08:43 np0005590528 nova_compute[239261]: 2026-01-21 14:08:43.082 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:08:43 np0005590528 nova_compute[239261]: 2026-01-21 14:08:43.083 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:08:44 np0005590528 nova_compute[239261]: 2026-01-21 14:08:44.083 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:44 np0005590528 nova_compute[239261]: 2026-01-21 14:08:44.083 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:44 np0005590528 nova_compute[239261]: 2026-01-21 14:08:44.084 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:44 np0005590528 nova_compute[239261]: 2026-01-21 14:08:44.084 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:08:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:08:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:08:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:08:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:08:56 np0005590528 podman[243697]: 2026-01-21 14:08:56.358391757 +0000 UTC m=+0.074789718 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 09:08:56 np0005590528 podman[243696]: 2026-01-21 14:08:56.412703064 +0000 UTC m=+0.127611098 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 09:08:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:09:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:09:07 np0005590528 podman[243883]: 2026-01-21 14:09:07.831331786 +0000 UTC m=+0.054928362 container create a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 09:09:07 np0005590528 systemd[1]: Started libpod-conmon-a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a.scope.
Jan 21 09:09:07 np0005590528 podman[243883]: 2026-01-21 14:09:07.807068751 +0000 UTC m=+0.030665307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:09:07 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:09:07 np0005590528 podman[243883]: 2026-01-21 14:09:07.933526768 +0000 UTC m=+0.157123334 container init a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:09:07 np0005590528 podman[243883]: 2026-01-21 14:09:07.944189084 +0000 UTC m=+0.167785670 container start a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 09:09:07 np0005590528 podman[243883]: 2026-01-21 14:09:07.950293997 +0000 UTC m=+0.173890583 container attach a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 09:09:07 np0005590528 naughty_boyd[243899]: 167 167
Jan 21 09:09:07 np0005590528 systemd[1]: libpod-a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a.scope: Deactivated successfully.
Jan 21 09:09:07 np0005590528 podman[243883]: 2026-01-21 14:09:07.953379664 +0000 UTC m=+0.176976210 container died a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 09:09:07 np0005590528 systemd[1]: var-lib-containers-storage-overlay-30fc648982a3f816e04d251bdf2a3d06111deabe1de657f6607af7fb3212e86b-merged.mount: Deactivated successfully.
Jan 21 09:09:07 np0005590528 podman[243883]: 2026-01-21 14:09:07.995108486 +0000 UTC m=+0.218705032 container remove a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:09:08 np0005590528 systemd[1]: libpod-conmon-a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a.scope: Deactivated successfully.
Jan 21 09:09:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:08 np0005590528 podman[243923]: 2026-01-21 14:09:08.239935509 +0000 UTC m=+0.076247825 container create cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 09:09:08 np0005590528 systemd[1]: Started libpod-conmon-cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae.scope.
Jan 21 09:09:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:09:08 np0005590528 podman[243923]: 2026-01-21 14:09:08.211359346 +0000 UTC m=+0.047671712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:09:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:09:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:09:08 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:09:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:08 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:08 np0005590528 podman[243923]: 2026-01-21 14:09:08.344801777 +0000 UTC m=+0.181114133 container init cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 09:09:08 np0005590528 podman[243923]: 2026-01-21 14:09:08.357093634 +0000 UTC m=+0.193405950 container start cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:09:08 np0005590528 podman[243923]: 2026-01-21 14:09:08.362162901 +0000 UTC m=+0.198475207 container attach cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 09:09:08 np0005590528 goofy_kalam[243939]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:09:08 np0005590528 goofy_kalam[243939]: --> All data devices are unavailable
Jan 21 09:09:08 np0005590528 systemd[1]: libpod-cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae.scope: Deactivated successfully.
Jan 21 09:09:08 np0005590528 podman[243923]: 2026-01-21 14:09:08.987624838 +0000 UTC m=+0.823937134 container died cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 09:09:09 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745-merged.mount: Deactivated successfully.
Jan 21 09:09:09 np0005590528 podman[243923]: 2026-01-21 14:09:09.040290984 +0000 UTC m=+0.876603270 container remove cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 09:09:09 np0005590528 systemd[1]: libpod-conmon-cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae.scope: Deactivated successfully.
Jan 21 09:09:09 np0005590528 podman[244034]: 2026-01-21 14:09:09.702927739 +0000 UTC m=+0.060715547 container create 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:09:09 np0005590528 systemd[1]: Started libpod-conmon-8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8.scope.
Jan 21 09:09:09 np0005590528 podman[244034]: 2026-01-21 14:09:09.674245583 +0000 UTC m=+0.032033431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:09:09 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:09:09 np0005590528 podman[244034]: 2026-01-21 14:09:09.792845934 +0000 UTC m=+0.150633782 container init 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 09:09:09 np0005590528 podman[244034]: 2026-01-21 14:09:09.803673675 +0000 UTC m=+0.161461473 container start 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:09:09 np0005590528 podman[244034]: 2026-01-21 14:09:09.807400458 +0000 UTC m=+0.165188316 container attach 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 09:09:09 np0005590528 hopeful_nash[244051]: 167 167
Jan 21 09:09:09 np0005590528 systemd[1]: libpod-8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8.scope: Deactivated successfully.
Jan 21 09:09:09 np0005590528 conmon[244051]: conmon 8e3f9ad598518adb572f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8.scope/container/memory.events
Jan 21 09:09:09 np0005590528 podman[244034]: 2026-01-21 14:09:09.813303775 +0000 UTC m=+0.171091593 container died 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:09:09 np0005590528 systemd[1]: var-lib-containers-storage-overlay-231c2bcf1798cfdac306f90548fc996355c94ac40489cc6d879ccac1c9171b07-merged.mount: Deactivated successfully.
Jan 21 09:09:09 np0005590528 podman[244034]: 2026-01-21 14:09:09.863343565 +0000 UTC m=+0.221131363 container remove 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:09:09 np0005590528 systemd[1]: libpod-conmon-8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8.scope: Deactivated successfully.
Jan 21 09:09:10 np0005590528 podman[244075]: 2026-01-21 14:09:10.089206484 +0000 UTC m=+0.047378864 container create 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 09:09:10 np0005590528 systemd[1]: Started libpod-conmon-01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2.scope.
Jan 21 09:09:10 np0005590528 podman[244075]: 2026-01-21 14:09:10.068886227 +0000 UTC m=+0.027058637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:09:10 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:09:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:10 np0005590528 podman[244075]: 2026-01-21 14:09:10.199754295 +0000 UTC m=+0.157926745 container init 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 09:09:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:10 np0005590528 podman[244075]: 2026-01-21 14:09:10.211519188 +0000 UTC m=+0.169691578 container start 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 21 09:09:10 np0005590528 podman[244075]: 2026-01-21 14:09:10.216278977 +0000 UTC m=+0.174451437 container attach 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]: {
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:    "0": [
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:        {
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "devices": [
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "/dev/loop3"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            ],
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_name": "ceph_lv0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_size": "21470642176",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "name": "ceph_lv0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "tags": {
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cluster_name": "ceph",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.crush_device_class": "",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.encrypted": "0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.objectstore": "bluestore",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osd_id": "0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.type": "block",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.vdo": "0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.with_tpm": "0"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            },
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "type": "block",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "vg_name": "ceph_vg0"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:        }
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:    ],
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:    "1": [
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:        {
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "devices": [
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "/dev/loop4"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            ],
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_name": "ceph_lv1",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_size": "21470642176",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "name": "ceph_lv1",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "tags": {
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cluster_name": "ceph",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.crush_device_class": "",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.encrypted": "0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.objectstore": "bluestore",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osd_id": "1",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.type": "block",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.vdo": "0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.with_tpm": "0"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            },
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "type": "block",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "vg_name": "ceph_vg1"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:        }
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:    ],
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:    "2": [
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:        {
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "devices": [
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "/dev/loop5"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            ],
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_name": "ceph_lv2",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_size": "21470642176",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "name": "ceph_lv2",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "tags": {
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.cluster_name": "ceph",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.crush_device_class": "",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.encrypted": "0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.objectstore": "bluestore",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osd_id": "2",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.type": "block",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.vdo": "0",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:                "ceph.with_tpm": "0"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            },
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "type": "block",
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:            "vg_name": "ceph_vg2"
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:        }
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]:    ]
Jan 21 09:09:10 np0005590528 cranky_mclean[244092]: }
Jan 21 09:09:10 np0005590528 systemd[1]: libpod-01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2.scope: Deactivated successfully.
Jan 21 09:09:10 np0005590528 podman[244075]: 2026-01-21 14:09:10.535248462 +0000 UTC m=+0.493420812 container died 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 09:09:10 np0005590528 systemd[1]: var-lib-containers-storage-overlay-97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e-merged.mount: Deactivated successfully.
Jan 21 09:09:10 np0005590528 podman[244075]: 2026-01-21 14:09:10.590996793 +0000 UTC m=+0.549169153 container remove 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 21 09:09:10 np0005590528 systemd[1]: libpod-conmon-01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2.scope: Deactivated successfully.
Jan 21 09:09:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:09:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:09:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:09:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:09:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:09:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:09:11 np0005590528 podman[244178]: 2026-01-21 14:09:11.203152279 +0000 UTC m=+0.073232580 container create 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:09:11 np0005590528 systemd[1]: Started libpod-conmon-8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8.scope.
Jan 21 09:09:11 np0005590528 podman[244178]: 2026-01-21 14:09:11.175371255 +0000 UTC m=+0.045451606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:09:11 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:09:11 np0005590528 podman[244178]: 2026-01-21 14:09:11.307708139 +0000 UTC m=+0.177788470 container init 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 09:09:11 np0005590528 podman[244178]: 2026-01-21 14:09:11.32013978 +0000 UTC m=+0.190220071 container start 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:09:11 np0005590528 podman[244178]: 2026-01-21 14:09:11.324590921 +0000 UTC m=+0.194671262 container attach 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 09:09:11 np0005590528 optimistic_shtern[244195]: 167 167
Jan 21 09:09:11 np0005590528 systemd[1]: libpod-8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8.scope: Deactivated successfully.
Jan 21 09:09:11 np0005590528 podman[244178]: 2026-01-21 14:09:11.328618801 +0000 UTC m=+0.198699092 container died 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:09:11 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ffefbb312087164ae4b78f19e93a18377011fc77853d0b0f869291ea3143688c-merged.mount: Deactivated successfully.
Jan 21 09:09:11 np0005590528 podman[244178]: 2026-01-21 14:09:11.385540563 +0000 UTC m=+0.255620834 container remove 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 09:09:11 np0005590528 systemd[1]: libpod-conmon-8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8.scope: Deactivated successfully.
Jan 21 09:09:11 np0005590528 podman[244217]: 2026-01-21 14:09:11.64172211 +0000 UTC m=+0.065930017 container create 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 09:09:11 np0005590528 systemd[1]: Started libpod-conmon-91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83.scope.
Jan 21 09:09:11 np0005590528 podman[244217]: 2026-01-21 14:09:11.614752426 +0000 UTC m=+0.038960393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:09:11 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:09:11 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:11 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:11 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:11 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:09:11 np0005590528 podman[244217]: 2026-01-21 14:09:11.746475836 +0000 UTC m=+0.170683753 container init 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 09:09:11 np0005590528 podman[244217]: 2026-01-21 14:09:11.758541766 +0000 UTC m=+0.182749673 container start 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:09:11 np0005590528 podman[244217]: 2026-01-21 14:09:11.763737146 +0000 UTC m=+0.187945043 container attach 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 09:09:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:12 np0005590528 lvm[244312]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:09:12 np0005590528 lvm[244312]: VG ceph_vg0 finished
Jan 21 09:09:12 np0005590528 lvm[244313]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:09:12 np0005590528 lvm[244313]: VG ceph_vg1 finished
Jan 21 09:09:12 np0005590528 lvm[244315]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:09:12 np0005590528 lvm[244315]: VG ceph_vg2 finished
Jan 21 09:09:12 np0005590528 admiring_bohr[244233]: {}
Jan 21 09:09:12 np0005590528 systemd[1]: libpod-91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83.scope: Deactivated successfully.
Jan 21 09:09:12 np0005590528 systemd[1]: libpod-91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83.scope: Consumed 1.325s CPU time.
Jan 21 09:09:12 np0005590528 podman[244217]: 2026-01-21 14:09:12.565719861 +0000 UTC m=+0.989927778 container died 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 09:09:12 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252-merged.mount: Deactivated successfully.
Jan 21 09:09:12 np0005590528 podman[244217]: 2026-01-21 14:09:12.63617818 +0000 UTC m=+1.060386077 container remove 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:09:12 np0005590528 systemd[1]: libpod-conmon-91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83.scope: Deactivated successfully.
Jan 21 09:09:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:09:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:09:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:09:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:09:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:09:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:09:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:09:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483851383' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:09:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:09:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483851383' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:09:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:27 np0005590528 podman[244354]: 2026-01-21 14:09:27.357781292 +0000 UTC m=+0.074928002 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:09:27 np0005590528 podman[244353]: 2026-01-21 14:09:27.394459194 +0000 UTC m=+0.115250245 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 09:09:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:09:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:09:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:09:33.899 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:09:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:09:33.899 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:09:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:38 np0005590528 nova_compute[239261]: 2026-01-21 14:09:38.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:09:38 np0005590528 nova_compute[239261]: 2026-01-21 14:09:38.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:09:38 np0005590528 nova_compute[239261]: 2026-01-21 14:09:38.727 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:09:38 np0005590528 nova_compute[239261]: 2026-01-21 14:09:38.875 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:09:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:09:39
Jan 21 09:09:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:09:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:09:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'vms', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 21 09:09:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:09:39 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 09:09:39 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 09:09:39 np0005590528 nova_compute[239261]: 2026-01-21 14:09:39.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:09:39 np0005590528 nova_compute[239261]: 2026-01-21 14:09:39.745 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:09:39 np0005590528 nova_compute[239261]: 2026-01-21 14:09:39.785 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:09:39 np0005590528 nova_compute[239261]: 2026-01-21 14:09:39.785 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:09:39 np0005590528 nova_compute[239261]: 2026-01-21 14:09:39.786 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:09:39 np0005590528 nova_compute[239261]: 2026-01-21 14:09:39.786 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:09:39 np0005590528 nova_compute[239261]: 2026-01-21 14:09:39.786 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:09:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:09:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543998522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:09:40 np0005590528 nova_compute[239261]: 2026-01-21 14:09:40.353 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:09:40 np0005590528 nova_compute[239261]: 2026-01-21 14:09:40.542 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:09:40 np0005590528 nova_compute[239261]: 2026-01-21 14:09:40.543 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5144MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:09:40 np0005590528 nova_compute[239261]: 2026-01-21 14:09:40.543 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:09:40 np0005590528 nova_compute[239261]: 2026-01-21 14:09:40.543 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:09:40 np0005590528 nova_compute[239261]: 2026-01-21 14:09:40.647 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:09:40 np0005590528 nova_compute[239261]: 2026-01-21 14:09:40.647 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:09:40 np0005590528 nova_compute[239261]: 2026-01-21 14:09:40.664 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:09:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:09:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:09:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:09:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987041817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:09:41 np0005590528 nova_compute[239261]: 2026-01-21 14:09:41.250 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:09:41 np0005590528 nova_compute[239261]: 2026-01-21 14:09:41.259 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:09:41 np0005590528 nova_compute[239261]: 2026-01-21 14:09:41.371 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:09:41 np0005590528 nova_compute[239261]: 2026-01-21 14:09:41.375 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:09:41 np0005590528 nova_compute[239261]: 2026-01-21 14:09:41.376 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:09:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:42 np0005590528 nova_compute[239261]: 2026-01-21 14:09:42.356 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:09:42 np0005590528 nova_compute[239261]: 2026-01-21 14:09:42.357 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:09:42 np0005590528 nova_compute[239261]: 2026-01-21 14:09:42.357 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:09:42 np0005590528 nova_compute[239261]: 2026-01-21 14:09:42.357 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:09:42 np0005590528 nova_compute[239261]: 2026-01-21 14:09:42.357 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:09:42 np0005590528 nova_compute[239261]: 2026-01-21 14:09:42.358 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:09:42 np0005590528 nova_compute[239261]: 2026-01-21 14:09:42.358 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 09:09:42 np0005590528 nova_compute[239261]: 2026-01-21 14:09:42.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:09:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:09:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 21 09:09:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 21 09:09:48 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 21 09:09:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 21 09:09:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 21 09:09:49 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 895 B/s wr, 3 op/s
Jan 21 09:09:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 21 09:09:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 21 09:09:50 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2771392571877214e-06 of space, bias 4.0, pg target 0.0015325671086252656 quantized to 16 (current 16)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:09:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:09:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 21 09:09:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 8.5 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Jan 21 09:09:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:09:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Jan 21 09:09:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 21 09:09:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 21 09:09:57 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 21 09:09:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.6 MiB/s wr, 20 op/s
Jan 21 09:09:58 np0005590528 podman[244447]: 2026-01-21 14:09:58.370235542 +0000 UTC m=+0.065970620 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 09:09:58 np0005590528 podman[244446]: 2026-01-21 14:09:58.386004324 +0000 UTC m=+0.097005261 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:10:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.2 MiB/s wr, 36 op/s
Jan 21 09:10:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 21 09:10:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 21 09:10:00 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 21 09:10:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.1 MiB/s wr, 28 op/s
Jan 21 09:10:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Jan 21 09:10:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.4 MiB/s wr, 22 op/s
Jan 21 09:10:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.361248) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608361284, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2070, "num_deletes": 251, "total_data_size": 3521512, "memory_usage": 3584480, "flush_reason": "Manual Compaction"}
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608388745, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3455214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16353, "largest_seqno": 18422, "table_properties": {"data_size": 3445733, "index_size": 6039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18706, "raw_average_key_size": 19, "raw_value_size": 3426841, "raw_average_value_size": 3649, "num_data_blocks": 272, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004382, "oldest_key_time": 1769004382, "file_creation_time": 1769004608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 27579 microseconds, and 9316 cpu microseconds.
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.388817) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3455214 bytes OK
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.388847) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.401007) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.401037) EVENT_LOG_v1 {"time_micros": 1769004608401029, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.401064) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3512855, prev total WAL file size 3512855, number of live WAL files 2.
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.402730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3374KB)], [38(7779KB)]
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608402841, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11421474, "oldest_snapshot_seqno": -1}
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4442 keys, 9612497 bytes, temperature: kUnknown
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608491736, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9612497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9579085, "index_size": 21206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107399, "raw_average_key_size": 24, "raw_value_size": 9495166, "raw_average_value_size": 2137, "num_data_blocks": 900, "num_entries": 4442, "num_filter_entries": 4442, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.491996) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9612497 bytes
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.496162) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.3 rd, 108.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4960, records dropped: 518 output_compression: NoCompression
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.496207) EVENT_LOG_v1 {"time_micros": 1769004608496191, "job": 18, "event": "compaction_finished", "compaction_time_micros": 89003, "compaction_time_cpu_micros": 25280, "output_level": 6, "num_output_files": 1, "total_output_size": 9612497, "num_input_records": 4960, "num_output_records": 4442, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608497152, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608498976, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.402600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:08 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:10:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:10:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:10:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:10:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:10:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:10:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:10:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:10:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:10:14 np0005590528 podman[244636]: 2026-01-21 14:10:14.04178078 +0000 UTC m=+0.046080756 container create 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:10:14 np0005590528 systemd[1]: Started libpod-conmon-35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331.scope.
Jan 21 09:10:14 np0005590528 podman[244636]: 2026-01-21 14:10:14.022719177 +0000 UTC m=+0.027019193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:10:14 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:10:14 np0005590528 podman[244636]: 2026-01-21 14:10:14.138882753 +0000 UTC m=+0.143182769 container init 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 09:10:14 np0005590528 podman[244636]: 2026-01-21 14:10:14.149530518 +0000 UTC m=+0.153830534 container start 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 09:10:14 np0005590528 podman[244636]: 2026-01-21 14:10:14.153891926 +0000 UTC m=+0.158191932 container attach 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:10:14 np0005590528 agitated_darwin[244653]: 167 167
Jan 21 09:10:14 np0005590528 systemd[1]: libpod-35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331.scope: Deactivated successfully.
Jan 21 09:10:14 np0005590528 conmon[244653]: conmon 35f70c8de86f962a3657 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331.scope/container/memory.events
Jan 21 09:10:14 np0005590528 podman[244636]: 2026-01-21 14:10:14.156267815 +0000 UTC m=+0.160567801 container died 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:10:14 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b0616dbc626913fac192180f64b7db3bfa24cdddbae3b28b2cf60ba3494b67a0-merged.mount: Deactivated successfully.
Jan 21 09:10:14 np0005590528 podman[244636]: 2026-01-21 14:10:14.199223812 +0000 UTC m=+0.203523798 container remove 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 09:10:14 np0005590528 systemd[1]: libpod-conmon-35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331.scope: Deactivated successfully.
Jan 21 09:10:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:10:14 np0005590528 podman[244677]: 2026-01-21 14:10:14.365257838 +0000 UTC m=+0.040379894 container create 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:10:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:10:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:10:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:10:14 np0005590528 systemd[1]: Started libpod-conmon-38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78.scope.
Jan 21 09:10:14 np0005590528 podman[244677]: 2026-01-21 14:10:14.346308377 +0000 UTC m=+0.021430463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:10:14 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:10:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:14 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:14 np0005590528 podman[244677]: 2026-01-21 14:10:14.463192341 +0000 UTC m=+0.138314487 container init 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:10:14 np0005590528 podman[244677]: 2026-01-21 14:10:14.472631176 +0000 UTC m=+0.147753232 container start 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:10:14 np0005590528 podman[244677]: 2026-01-21 14:10:14.476631225 +0000 UTC m=+0.151753281 container attach 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 09:10:14 np0005590528 angry_ride[244693]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:10:14 np0005590528 angry_ride[244693]: --> All data devices are unavailable
Jan 21 09:10:14 np0005590528 systemd[1]: libpod-38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78.scope: Deactivated successfully.
Jan 21 09:10:14 np0005590528 podman[244677]: 2026-01-21 14:10:14.994194695 +0000 UTC m=+0.669316771 container died 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:10:15 np0005590528 systemd[1]: var-lib-containers-storage-overlay-94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917-merged.mount: Deactivated successfully.
Jan 21 09:10:15 np0005590528 podman[244677]: 2026-01-21 14:10:15.044108216 +0000 UTC m=+0.719230302 container remove 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:10:15 np0005590528 systemd[1]: libpod-conmon-38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78.scope: Deactivated successfully.
Jan 21 09:10:15 np0005590528 podman[244783]: 2026-01-21 14:10:15.540598451 +0000 UTC m=+0.037604645 container create d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 09:10:15 np0005590528 systemd[1]: Started libpod-conmon-d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d.scope.
Jan 21 09:10:15 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:10:15 np0005590528 podman[244783]: 2026-01-21 14:10:15.618654701 +0000 UTC m=+0.115660905 container init d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 09:10:15 np0005590528 podman[244783]: 2026-01-21 14:10:15.523787514 +0000 UTC m=+0.020793728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:10:15 np0005590528 podman[244783]: 2026-01-21 14:10:15.625004129 +0000 UTC m=+0.122010313 container start d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:10:15 np0005590528 podman[244783]: 2026-01-21 14:10:15.630600818 +0000 UTC m=+0.127607022 container attach d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 09:10:15 np0005590528 friendly_pascal[244799]: 167 167
Jan 21 09:10:15 np0005590528 systemd[1]: libpod-d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d.scope: Deactivated successfully.
Jan 21 09:10:15 np0005590528 podman[244783]: 2026-01-21 14:10:15.632888025 +0000 UTC m=+0.129894229 container died d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 09:10:15 np0005590528 systemd[1]: var-lib-containers-storage-overlay-72b8cc38e309d85cec04e63b1b6fb30ec249d7d07be05510c5778239ac5c2658-merged.mount: Deactivated successfully.
Jan 21 09:10:15 np0005590528 podman[244783]: 2026-01-21 14:10:15.675168005 +0000 UTC m=+0.172174229 container remove d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:10:15 np0005590528 systemd[1]: libpod-conmon-d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d.scope: Deactivated successfully.
Jan 21 09:10:15 np0005590528 podman[244823]: 2026-01-21 14:10:15.835310605 +0000 UTC m=+0.029161976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:10:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:10:16 np0005590528 podman[244823]: 2026-01-21 14:10:16.284142507 +0000 UTC m=+0.477993858 container create a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 09:10:16 np0005590528 systemd[1]: Started libpod-conmon-a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc.scope.
Jan 21 09:10:16 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:10:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:16 np0005590528 podman[244823]: 2026-01-21 14:10:16.39091066 +0000 UTC m=+0.584762011 container init a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 09:10:16 np0005590528 podman[244823]: 2026-01-21 14:10:16.3997659 +0000 UTC m=+0.593617261 container start a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 09:10:16 np0005590528 podman[244823]: 2026-01-21 14:10:16.40380573 +0000 UTC m=+0.597657101 container attach a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]: {
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:    "0": [
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:        {
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "devices": [
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "/dev/loop3"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            ],
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_name": "ceph_lv0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_size": "21470642176",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "name": "ceph_lv0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "tags": {
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cluster_name": "ceph",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.crush_device_class": "",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.encrypted": "0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.objectstore": "bluestore",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osd_id": "0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.type": "block",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.vdo": "0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.with_tpm": "0"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            },
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "type": "block",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "vg_name": "ceph_vg0"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:        }
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:    ],
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:    "1": [
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:        {
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "devices": [
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "/dev/loop4"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            ],
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_name": "ceph_lv1",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_size": "21470642176",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "name": "ceph_lv1",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "tags": {
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cluster_name": "ceph",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.crush_device_class": "",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.encrypted": "0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.objectstore": "bluestore",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osd_id": "1",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.type": "block",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.vdo": "0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.with_tpm": "0"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            },
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "type": "block",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "vg_name": "ceph_vg1"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:        }
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:    ],
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:    "2": [
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:        {
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "devices": [
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "/dev/loop5"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            ],
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_name": "ceph_lv2",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_size": "21470642176",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "name": "ceph_lv2",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "tags": {
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.cluster_name": "ceph",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.crush_device_class": "",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.encrypted": "0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.objectstore": "bluestore",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osd_id": "2",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.type": "block",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.vdo": "0",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:                "ceph.with_tpm": "0"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            },
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "type": "block",
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:            "vg_name": "ceph_vg2"
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:        }
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]:    ]
Jan 21 09:10:16 np0005590528 determined_meninsky[244840]: }
Jan 21 09:10:16 np0005590528 systemd[1]: libpod-a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc.scope: Deactivated successfully.
Jan 21 09:10:16 np0005590528 podman[244823]: 2026-01-21 14:10:16.723659018 +0000 UTC m=+0.917510359 container died a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 09:10:16 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014-merged.mount: Deactivated successfully.
Jan 21 09:10:16 np0005590528 podman[244823]: 2026-01-21 14:10:16.768286517 +0000 UTC m=+0.962137868 container remove a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 09:10:16 np0005590528 systemd[1]: libpod-conmon-a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc.scope: Deactivated successfully.
Jan 21 09:10:17 np0005590528 podman[244923]: 2026-01-21 14:10:17.310797017 +0000 UTC m=+0.098537210 container create d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:10:17 np0005590528 podman[244923]: 2026-01-21 14:10:17.239879425 +0000 UTC m=+0.027619598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:10:17 np0005590528 systemd[1]: Started libpod-conmon-d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a.scope.
Jan 21 09:10:17 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:10:17 np0005590528 podman[244923]: 2026-01-21 14:10:17.411338965 +0000 UTC m=+0.199079138 container init d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:10:17 np0005590528 podman[244923]: 2026-01-21 14:10:17.418843092 +0000 UTC m=+0.206583275 container start d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:10:17 np0005590528 podman[244923]: 2026-01-21 14:10:17.423676281 +0000 UTC m=+0.211416464 container attach d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:10:17 np0005590528 systemd[1]: libpod-d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a.scope: Deactivated successfully.
Jan 21 09:10:17 np0005590528 exciting_swartz[244939]: 167 167
Jan 21 09:10:17 np0005590528 conmon[244939]: conmon d7d1d17acaa9fc725b86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a.scope/container/memory.events
Jan 21 09:10:17 np0005590528 podman[244923]: 2026-01-21 14:10:17.428496132 +0000 UTC m=+0.216236285 container died d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:10:17 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2ea5bd8f2fb6a9ddfbbb697a061061482fd9c193b6809b4a05a97fadad39a83e-merged.mount: Deactivated successfully.
Jan 21 09:10:17 np0005590528 podman[244923]: 2026-01-21 14:10:17.470722861 +0000 UTC m=+0.258463014 container remove d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 09:10:17 np0005590528 systemd[1]: libpod-conmon-d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a.scope: Deactivated successfully.
Jan 21 09:10:17 np0005590528 podman[244961]: 2026-01-21 14:10:17.663059849 +0000 UTC m=+0.044237379 container create a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:10:17 np0005590528 systemd[1]: Started libpod-conmon-a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61.scope.
Jan 21 09:10:17 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:10:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:17 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:10:17 np0005590528 podman[244961]: 2026-01-21 14:10:17.643178306 +0000 UTC m=+0.024355886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:10:17 np0005590528 podman[244961]: 2026-01-21 14:10:17.749878857 +0000 UTC m=+0.131056467 container init a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 09:10:17 np0005590528 podman[244961]: 2026-01-21 14:10:17.762050619 +0000 UTC m=+0.143228169 container start a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 09:10:17 np0005590528 podman[244961]: 2026-01-21 14:10:17.766370987 +0000 UTC m=+0.147548557 container attach a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:10:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:10:18 np0005590528 lvm[245056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:10:18 np0005590528 lvm[245056]: VG ceph_vg0 finished
Jan 21 09:10:18 np0005590528 lvm[245057]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:10:18 np0005590528 lvm[245057]: VG ceph_vg1 finished
Jan 21 09:10:18 np0005590528 lvm[245059]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:10:18 np0005590528 lvm[245059]: VG ceph_vg2 finished
Jan 21 09:10:18 np0005590528 epic_zhukovsky[244978]: {}
Jan 21 09:10:18 np0005590528 systemd[1]: libpod-a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61.scope: Deactivated successfully.
Jan 21 09:10:18 np0005590528 systemd[1]: libpod-a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61.scope: Consumed 1.212s CPU time.
Jan 21 09:10:18 np0005590528 podman[244961]: 2026-01-21 14:10:18.523985031 +0000 UTC m=+0.905162611 container died a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 09:10:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331-merged.mount: Deactivated successfully.
Jan 21 09:10:18 np0005590528 podman[244961]: 2026-01-21 14:10:18.655670513 +0000 UTC m=+1.036848033 container remove a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:10:18 np0005590528 systemd[1]: libpod-conmon-a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61.scope: Deactivated successfully.
Jan 21 09:10:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:10:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:10:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:10:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:10:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:10:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:10:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:10:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:10:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:10:21 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:21.541+0000 7fc516655640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:10:22 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/d2577f41-d908-4371-8c43-e8fbe046d39f'.
Jan 21 09:10:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:10:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:10:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:10:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "format": "json"}]: dispatch
Jan 21 09:10:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:10:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:10:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:10:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:10:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:10:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2631761722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:10:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:10:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2631761722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:10:23 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.tnwklj(active, since 25m)
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219/a6452fa6-7ff6-41a5-b0cb-e0c7da2f4521'.
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219/.meta.tmp'
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219/.meta.tmp' to config b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219/.meta'
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "format": "json"}]: dispatch
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 09:10:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 09:10:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:10:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de/205f3a51-88be-4bba-be8f-7be277cabc08'.
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de/.meta.tmp'
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de/.meta.tmp' to config b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de/.meta'
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "format": "json"}]: dispatch
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 09:10:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:10:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:10:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409/9de7cc8f-afb8-49c1-8ccf-2bc90c8f924e'.
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409/.meta.tmp'
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409/.meta.tmp' to config b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409/.meta'
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "format": "json"}]: dispatch
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 09:10:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:10:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 09:10:25 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 09:10:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:26 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:10:26.045 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:10:26 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:10:26.046 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:10:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 09:10:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s wr, 2 op/s
Jan 21 09:10:28 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:10:28.047 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s wr, 2 op/s
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "format": "json"}]: dispatch
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca297459-dcdc-48cc-b973-0a2fd8a93409' of type subvolume
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.599+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca297459-dcdc-48cc-b973-0a2fd8a93409' of type subvolume
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "force": true, "format": "json"}]: dispatch
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409'' moved to trashcan
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:10:29 np0005590528 podman[245141]: 2026-01-21 14:10:29.360211543 +0000 UTC m=+0.078059629 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 21 09:10:29 np0005590528 podman[245140]: 2026-01-21 14:10:29.398039823 +0000 UTC m=+0.119314725 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 09:10:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 09:10:30 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.tnwklj(active, since 25m)
Jan 21 09:10:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "format": "json"}]: dispatch
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec4e87bc-026b-4a6f-938e-c32b3b1010de' of type subvolume
Jan 21 09:10:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:31.467+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec4e87bc-026b-4a6f-938e-c32b3b1010de' of type subvolume
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "force": true, "format": "json"}]: dispatch
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de'' moved to trashcan
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:10:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94/ca18ae93-6039-44b0-aed6-bffe7b551018'.
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94/.meta.tmp'
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94/.meta.tmp' to config b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94/.meta'
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "format": "json"}]: dispatch
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 09:10:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 09:10:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:10:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "format": "json"}]: dispatch
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f1e76c5b-dd9f-45f4-b2d2-e22465776219' of type subvolume
Jan 21 09:10:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:33.014+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f1e76c5b-dd9f-45f4-b2d2-e22465776219' of type subvolume
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "force": true, "format": "json"}]: dispatch
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219'' moved to trashcan
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:10:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 09:10:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:10:33.900 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:10:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:10:33.900 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:10:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:10:33.900 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:10:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 09:10:35 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 20 KiB/s wr, 6 op/s
Jan 21 09:10:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 09:10:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 09:10:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 09:10:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 5 op/s
Jan 21 09:10:38 np0005590528 nova_compute[239261]: 2026-01-21 14:10:38.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:10:38 np0005590528 nova_compute[239261]: 2026-01-21 14:10:38.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:10:38 np0005590528 nova_compute[239261]: 2026-01-21 14:10:38.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:10:38 np0005590528 nova_compute[239261]: 2026-01-21 14:10:38.780 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "format": "json"}]: dispatch
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0a16a328-6a6b-4997-8d01-233d8aaecf94' of type subvolume
Jan 21 09:10:39 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:39.068+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0a16a328-6a6b-4997-8d01-233d8aaecf94' of type subvolume
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "force": true, "format": "json"}]: dispatch
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94'' moved to trashcan
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:10:39
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta']
Jan 21 09:10:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:10:39 np0005590528 nova_compute[239261]: 2026-01-21 14:10:39.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:10:39 np0005590528 nova_compute[239261]: 2026-01-21 14:10:39.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 09:10:39 np0005590528 nova_compute[239261]: 2026-01-21 14:10:39.755 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 09:10:39 np0005590528 nova_compute[239261]: 2026-01-21 14:10:39.755 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 09:10:39 np0005590528 nova_compute[239261]: 2026-01-21 14:10:39.755 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 09:10:39 np0005590528 nova_compute[239261]: 2026-01-21 14:10:39.755 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 09:10:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 16 KiB/s wr, 6 op/s
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3204784224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:10:40 np0005590528 nova_compute[239261]: 2026-01-21 14:10:40.300 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 09:10:40 np0005590528 nova_compute[239261]: 2026-01-21 14:10:40.477 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 09:10:40 np0005590528 nova_compute[239261]: 2026-01-21 14:10:40.478 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5162MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 09:10:40 np0005590528 nova_compute[239261]: 2026-01-21 14:10:40.479 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 09:10:40 np0005590528 nova_compute[239261]: 2026-01-21 14:10:40.479 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 09:10:40 np0005590528 nova_compute[239261]: 2026-01-21 14:10:40.543 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 09:10:40 np0005590528 nova_compute[239261]: 2026-01-21 14:10:40.543 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 09:10:40 np0005590528 nova_compute[239261]: 2026-01-21 14:10:40.564 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.968075) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004640968101, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 585, "num_deletes": 251, "total_data_size": 740232, "memory_usage": 750272, "flush_reason": "Manual Compaction"}
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004640974286, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 614936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18423, "largest_seqno": 19007, "table_properties": {"data_size": 611920, "index_size": 924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8035, "raw_average_key_size": 20, "raw_value_size": 605592, "raw_average_value_size": 1529, "num_data_blocks": 41, "num_entries": 396, "num_filter_entries": 396, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004609, "oldest_key_time": 1769004609, "file_creation_time": 1769004640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 6285 microseconds, and 2918 cpu microseconds.
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.974351) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 614936 bytes OK
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.974378) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.975911) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.975934) EVENT_LOG_v1 {"time_micros": 1769004640975927, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.975955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 736918, prev total WAL file size 736918, number of live WAL files 2.
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.976503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(600KB)], [41(9387KB)]
Jan 21 09:10:40 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004640976539, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10227433, "oldest_snapshot_seqno": -1}
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4330 keys, 6992315 bytes, temperature: kUnknown
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004641032799, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6992315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6963778, "index_size": 16587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105599, "raw_average_key_size": 24, "raw_value_size": 6885917, "raw_average_value_size": 1590, "num_data_blocks": 698, "num_entries": 4330, "num_filter_entries": 4330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.033052) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6992315 bytes
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.036020) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.5 rd, 124.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(28.0) write-amplify(11.4) OK, records in: 4838, records dropped: 508 output_compression: NoCompression
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.036054) EVENT_LOG_v1 {"time_micros": 1769004641036037, "job": 20, "event": "compaction_finished", "compaction_time_micros": 56345, "compaction_time_cpu_micros": 20322, "output_level": 6, "num_output_files": 1, "total_output_size": 6992315, "num_input_records": 4838, "num_output_records": 4330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004641036317, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004641038281, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.976404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:10:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187461756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:10:41 np0005590528 nova_compute[239261]: 2026-01-21 14:10:41.091 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 09:10:41 np0005590528 nova_compute[239261]: 2026-01-21 14:10:41.096 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 09:10:41 np0005590528 nova_compute[239261]: 2026-01-21 14:10:41.124 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 09:10:41 np0005590528 nova_compute[239261]: 2026-01-21 14:10:41.125 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 09:10:41 np0005590528 nova_compute[239261]: 2026-01-21 14:10:41.125 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:10:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:10:42 np0005590528 nova_compute[239261]: 2026-01-21 14:10:42.126 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:10:42 np0005590528 nova_compute[239261]: 2026-01-21 14:10:42.127 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:10:42 np0005590528 nova_compute[239261]: 2026-01-21 14:10:42.127 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:10:42 np0005590528 nova_compute[239261]: 2026-01-21 14:10:42.127 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 09:10:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 10 KiB/s wr, 4 op/s
Jan 21 09:10:42 np0005590528 nova_compute[239261]: 2026-01-21 14:10:42.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:10:42 np0005590528 nova_compute[239261]: 2026-01-21 14:10:42.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5/8b4e0f9c-cd5e-4a1d-b5b4-0c646ea195b3'.
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5/.meta.tmp'
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5/.meta.tmp' to config b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5/.meta'
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "format": "json"}]: dispatch
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 09:10:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 09:10:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:10:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:10:43 np0005590528 nova_compute[239261]: 2026-01-21 14:10:43.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:10:43 np0005590528 nova_compute[239261]: 2026-01-21 14:10:43.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:10:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 10 KiB/s wr, 5 op/s
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "format": "json"}]: dispatch
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09f7a444-a5d6-4cd1-8195-bcb6a300bcd5' of type subvolume
Jan 21 09:10:45 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:45.813+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09f7a444-a5d6-4cd1-8195-bcb6a300bcd5' of type subvolume
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "force": true, "format": "json"}]: dispatch
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5'' moved to trashcan
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:10:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 09:10:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 14 KiB/s wr, 7 op/s
Jan 21 09:10:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.4 KiB/s wr, 3 op/s
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 5 op/s
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662231874301377 of space, bias 1.0, pg target 0.1998669562290413 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.852332329108876e-06 of space, bias 4.0, pg target 0.007022798794930651 quantized to 16 (current 16)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:10:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:10:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.7 KiB/s wr, 4 op/s
Jan 21 09:10:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.7 KiB/s wr, 4 op/s
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5/801344bb-1db0-4dbb-90a5-ccedbd38215f'.
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5/.meta.tmp'
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5/.meta.tmp' to config b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5/.meta'
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "format": "json"}]: dispatch
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 09:10:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 09:10:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:10:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:10:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:10:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.6 KiB/s wr, 4 op/s
Jan 21 09:10:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 4.0 KiB/s wr, 2 op/s
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "format": "json"}]: dispatch
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd765fd4c-f99f-46af-bd07-596dac7c37d5' of type subvolume
Jan 21 09:10:59 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:59.793+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd765fd4c-f99f-46af-bd07-596dac7c37d5' of type subvolume
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "force": true, "format": "json"}]: dispatch
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5'' moved to trashcan
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:10:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 09:11:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 6.5 KiB/s wr, 3 op/s
Jan 21 09:11:00 np0005590528 podman[245231]: 2026-01-21 14:11:00.345422885 +0000 UTC m=+0.054000454 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 21 09:11:00 np0005590528 podman[245230]: 2026-01-21 14:11:00.367810121 +0000 UTC m=+0.088278336 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:11:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 1 op/s
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2/deefac8d-d835-46fb-b96e-2a3f5c2af6a5'.
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2/.meta.tmp'
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2/.meta.tmp' to config b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2/.meta'
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "format": "json"}]: dispatch
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 09:11:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 09:11:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:11:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:11:04 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054", "format": "json"}]: dispatch
Jan 21 09:11:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 1 op/s
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054_2fe464ee-20eb-425c-b4bb-d3f446c877cd", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054_2fe464ee-20eb-425c-b4bb-d3f446c877cd, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054_2fe464ee-20eb-425c-b4bb-d3f446c877cd, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:11:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 9.5 KiB/s wr, 4 op/s
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f/0db648a5-66e9-45ca-ac1f-bc80d2193114'.
Jan 21 09:11:07 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f/.meta.tmp'
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f/.meta.tmp' to config b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f/.meta'
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "format": "json"}]: dispatch
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 09:11:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 09:11:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:11:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:11:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.5 KiB/s wr, 3 op/s
Jan 21 09:11:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 6 op/s
Jan 21 09:11:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:11:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:11:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:11:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:11:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:11:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:11:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 13 KiB/s wr, 5 op/s
Jan 21 09:11:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 21 09:11:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 21 09:11:12 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "format": "json"}]: dispatch
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18603bd9-4e2c-4abb-ab1b-01752b8839c2' of type subvolume
Jan 21 09:11:13 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:11:13.261+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18603bd9-4e2c-4abb-ab1b-01752b8839c2' of type subvolume
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2'' moved to trashcan
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:11:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 09:11:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 15 KiB/s wr, 6 op/s
Jan 21 09:11:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 4 op/s
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "format": "json"}]: dispatch
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:73408dc6-4c0b-4079-a270-af31e9a2608f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:73408dc6-4c0b-4079-a270-af31e9a2608f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73408dc6-4c0b-4079-a270-af31e9a2608f' of type subvolume
Jan 21 09:11:16 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:11:16.803+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73408dc6-4c0b-4079-a270-af31e9a2608f' of type subvolume
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f'' moved to trashcan
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:11:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 09:11:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 4 op/s
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:11:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:11:20 np0005590528 podman[245420]: 2026-01-21 14:11:20.032216759 +0000 UTC m=+0.056367171 container create b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:11:20 np0005590528 systemd[1]: Started libpod-conmon-b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6.scope.
Jan 21 09:11:20 np0005590528 podman[245420]: 2026-01-21 14:11:20.012197772 +0000 UTC m=+0.036348194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:11:20 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:11:20 np0005590528 podman[245420]: 2026-01-21 14:11:20.125154718 +0000 UTC m=+0.149305140 container init b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 09:11:20 np0005590528 podman[245420]: 2026-01-21 14:11:20.13287383 +0000 UTC m=+0.157024232 container start b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:11:20 np0005590528 podman[245420]: 2026-01-21 14:11:20.13646356 +0000 UTC m=+0.160613992 container attach b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 09:11:20 np0005590528 systemd[1]: libpod-b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6.scope: Deactivated successfully.
Jan 21 09:11:20 np0005590528 funny_swanson[245436]: 167 167
Jan 21 09:11:20 np0005590528 conmon[245436]: conmon b9c15494e112e2cac944 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6.scope/container/memory.events
Jan 21 09:11:20 np0005590528 podman[245420]: 2026-01-21 14:11:20.140407187 +0000 UTC m=+0.164557589 container died b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 09:11:20 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6af6ea8dbb83ba104b7972947f74aa9133e8709acbae1470ceabd98fd5a36871-merged.mount: Deactivated successfully.
Jan 21 09:11:20 np0005590528 podman[245420]: 2026-01-21 14:11:20.188084751 +0000 UTC m=+0.212235163 container remove b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:11:20 np0005590528 systemd[1]: libpod-conmon-b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6.scope: Deactivated successfully.
Jan 21 09:11:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 09:11:20 np0005590528 podman[245460]: 2026-01-21 14:11:20.36468245 +0000 UTC m=+0.048275431 container create 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:11:20 np0005590528 systemd[1]: Started libpod-conmon-3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e.scope.
Jan 21 09:11:20 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:11:20 np0005590528 podman[245460]: 2026-01-21 14:11:20.343871183 +0000 UTC m=+0.027464184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:11:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:20 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:20 np0005590528 podman[245460]: 2026-01-21 14:11:20.468377187 +0000 UTC m=+0.151970198 container init 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:11:20 np0005590528 podman[245460]: 2026-01-21 14:11:20.479358759 +0000 UTC m=+0.162951750 container start 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 21 09:11:20 np0005590528 podman[245460]: 2026-01-21 14:11:20.483380939 +0000 UTC m=+0.166973930 container attach 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 09:11:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:11:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:11:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:11:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0", "format": "json"}]: dispatch
Jan 21 09:11:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:21 np0005590528 amazing_pike[245476]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:11:21 np0005590528 amazing_pike[245476]: --> All data devices are unavailable
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 21 09:11:21 np0005590528 systemd[1]: libpod-3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e.scope: Deactivated successfully.
Jan 21 09:11:21 np0005590528 podman[245460]: 2026-01-21 14:11:21.050897631 +0000 UTC m=+0.734490612 container died 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.378352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681378382, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 664, "num_deletes": 257, "total_data_size": 676140, "memory_usage": 689816, "flush_reason": "Manual Compaction"}
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681386839, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 669361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19008, "largest_seqno": 19671, "table_properties": {"data_size": 665964, "index_size": 1241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7969, "raw_average_key_size": 18, "raw_value_size": 658812, "raw_average_value_size": 1521, "num_data_blocks": 57, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004641, "oldest_key_time": 1769004641, "file_creation_time": 1769004681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8560 microseconds, and 3583 cpu microseconds.
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.386900) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 669361 bytes OK
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.386936) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.389876) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.389894) EVENT_LOG_v1 {"time_micros": 1769004681389889, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.389911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 672555, prev total WAL file size 672555, number of live WAL files 2.
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.390307) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(653KB)], [44(6828KB)]
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681390381, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7661676, "oldest_snapshot_seqno": -1}
Jan 21 09:11:21 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84-merged.mount: Deactivated successfully.
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4236 keys, 7542305 bytes, temperature: kUnknown
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681474266, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7542305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7513289, "index_size": 17322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 104928, "raw_average_key_size": 24, "raw_value_size": 7435934, "raw_average_value_size": 1755, "num_data_blocks": 726, "num_entries": 4236, "num_filter_entries": 4236, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.475117) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7542305 bytes
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.476654) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.6 rd, 89.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 6.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(22.7) write-amplify(11.3) OK, records in: 4763, records dropped: 527 output_compression: NoCompression
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.476684) EVENT_LOG_v1 {"time_micros": 1769004681476668, "job": 22, "event": "compaction_finished", "compaction_time_micros": 84567, "compaction_time_cpu_micros": 30170, "output_level": 6, "num_output_files": 1, "total_output_size": 7542305, "num_input_records": 4763, "num_output_records": 4236, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681477194, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681479092, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.390215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:11:21 np0005590528 podman[245460]: 2026-01-21 14:11:21.520787886 +0000 UTC m=+1.204380897 container remove 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:11:21 np0005590528 systemd[1]: libpod-conmon-3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e.scope: Deactivated successfully.
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed/c17a5ed2-c845-4ea1-bc03-5533f6ecbf92'.
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed/.meta.tmp'
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed/.meta.tmp' to config b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed/.meta'
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "format": "json"}]: dispatch
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 09:11:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:11:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:11:22 np0005590528 podman[245570]: 2026-01-21 14:11:22.062383533 +0000 UTC m=+0.063119559 container create 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:11:22 np0005590528 systemd[1]: Started libpod-conmon-3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5.scope.
Jan 21 09:11:22 np0005590528 podman[245570]: 2026-01-21 14:11:22.033752282 +0000 UTC m=+0.034488388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:11:22 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:11:22 np0005590528 podman[245570]: 2026-01-21 14:11:22.148511754 +0000 UTC m=+0.149247820 container init 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:11:22 np0005590528 podman[245570]: 2026-01-21 14:11:22.160352897 +0000 UTC m=+0.161088943 container start 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:11:22 np0005590528 podman[245570]: 2026-01-21 14:11:22.164363807 +0000 UTC m=+0.165099863 container attach 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 09:11:22 np0005590528 determined_fermi[245586]: 167 167
Jan 21 09:11:22 np0005590528 systemd[1]: libpod-3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5.scope: Deactivated successfully.
Jan 21 09:11:22 np0005590528 podman[245570]: 2026-01-21 14:11:22.168948441 +0000 UTC m=+0.169684497 container died 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:11:22 np0005590528 systemd[1]: var-lib-containers-storage-overlay-4a881eff8c77ea85eaf46606867073bf1b0c3c6fcb9bdea410e454e0db06db7f-merged.mount: Deactivated successfully.
Jan 21 09:11:22 np0005590528 podman[245570]: 2026-01-21 14:11:22.214973795 +0000 UTC m=+0.215709831 container remove 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:11:22 np0005590528 systemd[1]: libpod-conmon-3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5.scope: Deactivated successfully.
Jan 21 09:11:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 416 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 09:11:22 np0005590528 podman[245610]: 2026-01-21 14:11:22.409662912 +0000 UTC m=+0.050567667 container create 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 09:11:22 np0005590528 systemd[1]: Started libpod-conmon-06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b.scope.
Jan 21 09:11:22 np0005590528 podman[245610]: 2026-01-21 14:11:22.382340274 +0000 UTC m=+0.023245059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:11:22 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:11:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:22 np0005590528 podman[245610]: 2026-01-21 14:11:22.506651652 +0000 UTC m=+0.147556407 container init 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 09:11:22 np0005590528 podman[245610]: 2026-01-21 14:11:22.513294017 +0000 UTC m=+0.154198772 container start 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 21 09:11:22 np0005590528 podman[245610]: 2026-01-21 14:11:22.518172039 +0000 UTC m=+0.159076794 container attach 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]: {
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:    "0": [
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:        {
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "devices": [
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "/dev/loop3"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            ],
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_name": "ceph_lv0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_size": "21470642176",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "name": "ceph_lv0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "tags": {
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cluster_name": "ceph",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.crush_device_class": "",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.encrypted": "0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.objectstore": "bluestore",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osd_id": "0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.type": "block",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.vdo": "0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.with_tpm": "0"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            },
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "type": "block",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "vg_name": "ceph_vg0"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:        }
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:    ],
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:    "1": [
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:        {
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "devices": [
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "/dev/loop4"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            ],
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_name": "ceph_lv1",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_size": "21470642176",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "name": "ceph_lv1",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "tags": {
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cluster_name": "ceph",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.crush_device_class": "",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.encrypted": "0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.objectstore": "bluestore",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osd_id": "1",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.type": "block",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.vdo": "0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.with_tpm": "0"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            },
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "type": "block",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "vg_name": "ceph_vg1"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:        }
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:    ],
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:    "2": [
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:        {
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "devices": [
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "/dev/loop5"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            ],
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_name": "ceph_lv2",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_size": "21470642176",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "name": "ceph_lv2",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "tags": {
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.cluster_name": "ceph",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.crush_device_class": "",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.encrypted": "0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.objectstore": "bluestore",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osd_id": "2",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.type": "block",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.vdo": "0",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:                "ceph.with_tpm": "0"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            },
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "type": "block",
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:            "vg_name": "ceph_vg2"
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:        }
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]:    ]
Jan 21 09:11:22 np0005590528 romantic_swartz[245626]: }
Jan 21 09:11:22 np0005590528 systemd[1]: libpod-06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b.scope: Deactivated successfully.
Jan 21 09:11:22 np0005590528 podman[245610]: 2026-01-21 14:11:22.839691867 +0000 UTC m=+0.480596692 container died 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:11:22 np0005590528 systemd[1]: var-lib-containers-storage-overlay-62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710-merged.mount: Deactivated successfully.
Jan 21 09:11:22 np0005590528 podman[245610]: 2026-01-21 14:11:22.885386983 +0000 UTC m=+0.526291758 container remove 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 09:11:22 np0005590528 systemd[1]: libpod-conmon-06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b.scope: Deactivated successfully.
Jan 21 09:11:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:11:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443226209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:11:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:11:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443226209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:11:23 np0005590528 podman[245709]: 2026-01-21 14:11:23.386357641 +0000 UTC m=+0.037950074 container create 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:11:23 np0005590528 systemd[1]: Started libpod-conmon-854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77.scope.
Jan 21 09:11:23 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:11:23 np0005590528 podman[245709]: 2026-01-21 14:11:23.46200194 +0000 UTC m=+0.113594463 container init 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 09:11:23 np0005590528 podman[245709]: 2026-01-21 14:11:23.372300352 +0000 UTC m=+0.023892805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:11:23 np0005590528 podman[245709]: 2026-01-21 14:11:23.46845778 +0000 UTC m=+0.120050213 container start 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 09:11:23 np0005590528 podman[245709]: 2026-01-21 14:11:23.472489591 +0000 UTC m=+0.124082114 container attach 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 09:11:23 np0005590528 focused_wu[245726]: 167 167
Jan 21 09:11:23 np0005590528 systemd[1]: libpod-854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77.scope: Deactivated successfully.
Jan 21 09:11:23 np0005590528 podman[245709]: 2026-01-21 14:11:23.473582328 +0000 UTC m=+0.125174771 container died 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 09:11:23 np0005590528 systemd[1]: var-lib-containers-storage-overlay-7eef203bbacf5ad34bfd76daf38eb8c5e54d036f3b09bf5e34ad951ca22a2dc4-merged.mount: Deactivated successfully.
Jan 21 09:11:23 np0005590528 podman[245709]: 2026-01-21 14:11:23.511644484 +0000 UTC m=+0.163236917 container remove 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:11:23 np0005590528 systemd[1]: libpod-conmon-854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77.scope: Deactivated successfully.
Jan 21 09:11:23 np0005590528 podman[245751]: 2026-01-21 14:11:23.721357695 +0000 UTC m=+0.056807273 container create 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 09:11:23 np0005590528 systemd[1]: Started libpod-conmon-51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0.scope.
Jan 21 09:11:23 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:11:23 np0005590528 podman[245751]: 2026-01-21 14:11:23.704363122 +0000 UTC m=+0.039812730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:11:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:23 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:11:23 np0005590528 podman[245751]: 2026-01-21 14:11:23.817126514 +0000 UTC m=+0.152576172 container init 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 09:11:23 np0005590528 podman[245751]: 2026-01-21 14:11:23.824897267 +0000 UTC m=+0.160346865 container start 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:11:23 np0005590528 podman[245751]: 2026-01-21 14:11:23.828706202 +0000 UTC m=+0.164155860 container attach 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 09:11:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 09:11:24 np0005590528 lvm[245846]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:11:24 np0005590528 lvm[245846]: VG ceph_vg0 finished
Jan 21 09:11:24 np0005590528 lvm[245847]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:11:24 np0005590528 lvm[245847]: VG ceph_vg1 finished
Jan 21 09:11:24 np0005590528 lvm[245849]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:11:24 np0005590528 lvm[245849]: VG ceph_vg2 finished
Jan 21 09:11:24 np0005590528 wonderful_wing[245768]: {}
Jan 21 09:11:24 np0005590528 systemd[1]: libpod-51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0.scope: Deactivated successfully.
Jan 21 09:11:24 np0005590528 systemd[1]: libpod-51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0.scope: Consumed 1.305s CPU time.
Jan 21 09:11:24 np0005590528 podman[245751]: 2026-01-21 14:11:24.628182578 +0000 UTC m=+0.963632176 container died 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:11:24 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38-merged.mount: Deactivated successfully.
Jan 21 09:11:24 np0005590528 podman[245751]: 2026-01-21 14:11:24.673056603 +0000 UTC m=+1.008506191 container remove 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:11:24 np0005590528 systemd[1]: libpod-conmon-51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0.scope: Deactivated successfully.
Jan 21 09:11:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:11:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:11:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:11:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:11:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:11:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:11:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 09:11:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518", "format": "json"}]: dispatch
Jan 21 09:11:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:11:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:11:28 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/f9253af6-64bc-4ad7-b4bd-56feef7fa9fe'.
Jan 21 09:11:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 09:11:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp'
Jan 21 09:11:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp' to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta'
Jan 21 09:11:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:11:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "format": "json"}]: dispatch
Jan 21 09:11:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:11:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:11:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:11:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:11:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 3 op/s
Jan 21 09:11:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:31 np0005590528 podman[245889]: 2026-01-21 14:11:31.359800971 +0000 UTC m=+0.086163530 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 21 09:11:31 np0005590528 podman[245888]: 2026-01-21 14:11:31.371839715 +0000 UTC m=+0.098120102 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 09:11:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Jan 21 09:11:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1", "format": "json"}]: dispatch
Jan 21 09:11:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:11:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:11:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce", "format": "json"}]: dispatch
Jan 21 09:11:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:11:33.902 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:11:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:11:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:11:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:11:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:11:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s wr, 2 op/s
Jan 21 09:11:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 3 op/s
Jan 21 09:11:36 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:11:36.919 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:11:36 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:11:36.920 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:11:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 09:11:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe", "format": "json"}]: dispatch
Jan 21 09:11:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:38 np0005590528 nova_compute[239261]: 2026-01-21 14:11:38.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:38 np0005590528 nova_compute[239261]: 2026-01-21 14:11:38.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:11:38 np0005590528 nova_compute[239261]: 2026-01-21 14:11:38.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:11:38 np0005590528 nova_compute[239261]: 2026-01-21 14:11:38.753 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:11:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:11:39
Jan 21 09:11:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:11:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:11:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'cephfs.cephfs.meta', '.mgr']
Jan 21 09:11:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:11:39 np0005590528 nova_compute[239261]: 2026-01-21 14:11:39.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:11:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 2 op/s
Jan 21 09:11:40 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7", "format": "json"}]: dispatch
Jan 21 09:11:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:11:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152294106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.664 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.803 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.805 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5117MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.805 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:11:40 np0005590528 nova_compute[239261]: 2026-01-21 14:11:40.805 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:11:40 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:11:40.921 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:11:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:11:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:11:41 np0005590528 nova_compute[239261]: 2026-01-21 14:11:41.512 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:11:41 np0005590528 nova_compute[239261]: 2026-01-21 14:11:41.513 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:11:41 np0005590528 nova_compute[239261]: 2026-01-21 14:11:41.537 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:11:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:11:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2480374122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:11:42 np0005590528 nova_compute[239261]: 2026-01-21 14:11:42.087 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:11:42 np0005590528 nova_compute[239261]: 2026-01-21 14:11:42.094 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:11:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 09:11:42 np0005590528 nova_compute[239261]: 2026-01-21 14:11:42.737 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:11:42 np0005590528 nova_compute[239261]: 2026-01-21 14:11:42.740 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:11:42 np0005590528 nova_compute[239261]: 2026-01-21 14:11:42.740 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:11:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.742 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.742 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.767 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.767 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.767 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.767 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.768 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.768 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:11:44 np0005590528 nova_compute[239261]: 2026-01-21 14:11:44.768 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:11:44 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7_7af6d476-9e96-455d-901d-cb117be73224", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:44 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7_7af6d476-9e96-455d-901d-cb117be73224, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:11:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:11:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7_7af6d476-9e96-455d-901d-cb117be73224, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:11:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:11:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Jan 21 09:11:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662229856041607 of space, bias 1.0, pg target 0.1998668956812482 quantized to 32 (current 32)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4002298155263771e-05 of space, bias 4.0, pg target 0.016802757786316524 quantized to 16 (current 16)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:11:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:11:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 09:11:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 2 op/s
Jan 21 09:11:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 21 09:11:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 21 09:11:52 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 13 KiB/s wr, 2 op/s
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe_3545ae94-f727-47c4-a7fd-1c526ecea0fa", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe_3545ae94-f727-47c4-a7fd-1c526ecea0fa, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe_3545ae94-f727-47c4-a7fd-1c526ecea0fa, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:11:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:11:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 09:11:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce_13d8e771-64b1-4720-ac35-75306a2796ca", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce_13d8e771-64b1-4720-ac35-75306a2796ca, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce_13d8e771-64b1-4720-ac35-75306a2796ca, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce", "force": true, "format": "json"}]: dispatch
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:11:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1_b687d358-d36c-4697-89e0-1aa237110732", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1_b687d358-d36c-4697-89e0-1aa237110732, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp'
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp' to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta'
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1_b687d358-d36c-4697-89e0-1aa237110732, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp'
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp' to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta'
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:12:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 20 KiB/s wr, 3 op/s
Jan 21 09:12:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 21 09:12:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 21 09:12:01 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 21 09:12:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 21 09:12:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 21 09:12:02 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 21 09:12:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 25 KiB/s wr, 4 op/s
Jan 21 09:12:02 np0005590528 podman[245977]: 2026-01-21 14:12:02.327691637 +0000 UTC m=+0.052381777 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 09:12:02 np0005590528 podman[245976]: 2026-01-21 14:12:02.359731618 +0000 UTC m=+0.086629842 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518_61418a5a-530f-4665-9f42-37eaec4b7f3b", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518_61418a5a-530f-4665-9f42-37eaec4b7f3b, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518_61418a5a-530f-4665-9f42-37eaec4b7f3b, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:03 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:12:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "format": "json"}]: dispatch
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '901b9a46-d563-4a2c-bc82-2f893614e2f0' of type subvolume
Jan 21 09:12:04 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:04.064+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '901b9a46-d563-4a2c-bc82-2f893614e2f0' of type subvolume
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0'' moved to trashcan
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 09:12:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 09:12:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 43 KiB/s wr, 7 op/s
Jan 21 09:12:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 21 09:12:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 21 09:12:07 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 21 09:12:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0_a8b3e1a0-047e-4d57-b74a-72ec2981cdc4", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0_a8b3e1a0-047e-4d57-b74a-72ec2981cdc4, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:12:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:12:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0_a8b3e1a0-047e-4d57-b74a-72ec2981cdc4, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 659 B/s rd, 42 KiB/s wr, 7 op/s
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "format": "json"}]: dispatch
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:08 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:08.527+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'be026c8c-9a77-4436-9eb0-bd80e75652ed' of type subvolume
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'be026c8c-9a77-4436-9eb0-bd80e75652ed' of type subvolume
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed'' moved to trashcan
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:12:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 09:12:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 51 KiB/s wr, 10 op/s
Jan 21 09:12:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:12:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:12:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:12:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:12:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:12:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:12:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 21 09:12:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 21 09:12:11 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 21 09:12:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 21 09:12:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 21 09:12:12 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "format": "json"}]: dispatch
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:12 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:12.129+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3ce0e74-c7d0-4049-ba17-7d4296160447' of type subvolume
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3ce0e74-c7d0-4049-ba17-7d4296160447' of type subvolume
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447'' moved to trashcan
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 09:12:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 25 KiB/s wr, 6 op/s
Jan 21 09:12:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 22 KiB/s wr, 5 op/s
Jan 21 09:12:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 36 KiB/s wr, 9 op/s
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943/1282aaf8-9b49-40e2-a843-a3b0a737b268'.
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943/.meta.tmp'
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943/.meta.tmp' to config b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943/.meta'
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "format": "json"}]: dispatch
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 09:12:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 09:12:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:12:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:12:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 18 KiB/s wr, 4 op/s
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea104d42-8223-4da1-870a-ba39917e4943", "format": "json"}]: dispatch
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea104d42-8223-4da1-870a-ba39917e4943, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea104d42-8223-4da1-870a-ba39917e4943, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:20.273+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea104d42-8223-4da1-870a-ba39917e4943' of type subvolume
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea104d42-8223-4da1-870a-ba39917e4943' of type subvolume
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943'' moved to trashcan
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 09:12:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 444 B/s rd, 21 KiB/s wr, 5 op/s
Jan 21 09:12:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 21 09:12:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 21 09:12:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 21 09:12:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 19 KiB/s wr, 4 op/s
Jan 21 09:12:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:12:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954126033' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:12:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:12:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954126033' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:12:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 19 KiB/s wr, 4 op/s
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:12:25 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:12:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:26 np0005590528 podman[246161]: 2026-01-21 14:12:26.089773512 +0000 UTC m=+0.050075060 container create 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 09:12:26 np0005590528 systemd[1]: Started libpod-conmon-3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369.scope.
Jan 21 09:12:26 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:12:26 np0005590528 podman[246161]: 2026-01-21 14:12:26.062292303 +0000 UTC m=+0.022593871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:12:26 np0005590528 podman[246161]: 2026-01-21 14:12:26.20376817 +0000 UTC m=+0.164069748 container init 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 09:12:26 np0005590528 podman[246161]: 2026-01-21 14:12:26.210837253 +0000 UTC m=+0.171138801 container start 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 09:12:26 np0005590528 modest_taussig[246178]: 167 167
Jan 21 09:12:26 np0005590528 systemd[1]: libpod-3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369.scope: Deactivated successfully.
Jan 21 09:12:26 np0005590528 conmon[246178]: conmon 3134018e7de34ea5acc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369.scope/container/memory.events
Jan 21 09:12:26 np0005590528 podman[246161]: 2026-01-21 14:12:26.227015766 +0000 UTC m=+0.187317334 container attach 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Jan 21 09:12:26 np0005590528 podman[246161]: 2026-01-21 14:12:26.228601815 +0000 UTC m=+0.188903373 container died 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 09:12:26 np0005590528 systemd[1]: var-lib-containers-storage-overlay-74fc27476b31014afcfedbd2bec6a309681f51f26fd176e2b6f611f325567d4a-merged.mount: Deactivated successfully.
Jan 21 09:12:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 3 op/s
Jan 21 09:12:26 np0005590528 podman[246161]: 2026-01-21 14:12:26.377308819 +0000 UTC m=+0.337610397 container remove 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:12:26 np0005590528 systemd[1]: libpod-conmon-3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369.scope: Deactivated successfully.
Jan 21 09:12:26 np0005590528 podman[246205]: 2026-01-21 14:12:26.555660584 +0000 UTC m=+0.045253193 container create d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:12:26 np0005590528 systemd[1]: Started libpod-conmon-d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4.scope.
Jan 21 09:12:26 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:12:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:26 np0005590528 podman[246205]: 2026-01-21 14:12:26.536777875 +0000 UTC m=+0.026370484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:12:26 np0005590528 podman[246205]: 2026-01-21 14:12:26.645596176 +0000 UTC m=+0.135188815 container init d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 09:12:26 np0005590528 podman[246205]: 2026-01-21 14:12:26.65189694 +0000 UTC m=+0.141489539 container start d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 09:12:26 np0005590528 podman[246205]: 2026-01-21 14:12:26.659176487 +0000 UTC m=+0.148769106 container attach d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:12:26 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:12:26 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:12:26 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:12:27 np0005590528 angry_austin[246221]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:12:27 np0005590528 angry_austin[246221]: --> All data devices are unavailable
Jan 21 09:12:27 np0005590528 systemd[1]: libpod-d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4.scope: Deactivated successfully.
Jan 21 09:12:27 np0005590528 podman[246205]: 2026-01-21 14:12:27.11978241 +0000 UTC m=+0.609375049 container died d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 09:12:27 np0005590528 systemd[1]: var-lib-containers-storage-overlay-968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247-merged.mount: Deactivated successfully.
Jan 21 09:12:27 np0005590528 podman[246205]: 2026-01-21 14:12:27.318446261 +0000 UTC m=+0.808038870 container remove d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 09:12:27 np0005590528 systemd[1]: libpod-conmon-d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4.scope: Deactivated successfully.
Jan 21 09:12:27 np0005590528 podman[246314]: 2026-01-21 14:12:27.711108189 +0000 UTC m=+0.018286327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:12:27 np0005590528 podman[246314]: 2026-01-21 14:12:27.850968887 +0000 UTC m=+0.158146985 container create 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:12:27 np0005590528 systemd[1]: Started libpod-conmon-600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806.scope.
Jan 21 09:12:27 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:12:27 np0005590528 podman[246314]: 2026-01-21 14:12:27.973648076 +0000 UTC m=+0.280826254 container init 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 09:12:27 np0005590528 podman[246314]: 2026-01-21 14:12:27.980657547 +0000 UTC m=+0.287835655 container start 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 09:12:27 np0005590528 nice_faraday[246330]: 167 167
Jan 21 09:12:27 np0005590528 podman[246314]: 2026-01-21 14:12:27.985160607 +0000 UTC m=+0.292338715 container attach 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:12:27 np0005590528 systemd[1]: libpod-600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806.scope: Deactivated successfully.
Jan 21 09:12:27 np0005590528 podman[246314]: 2026-01-21 14:12:27.987717129 +0000 UTC m=+0.294895237 container died 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:12:28 np0005590528 systemd[1]: var-lib-containers-storage-overlay-81dcb4357e6fd6febef62fbf00e8a84581eb7faf931db1f65735a0df26f4260d-merged.mount: Deactivated successfully.
Jan 21 09:12:28 np0005590528 podman[246314]: 2026-01-21 14:12:28.029948048 +0000 UTC m=+0.337126166 container remove 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 09:12:28 np0005590528 systemd[1]: libpod-conmon-600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806.scope: Deactivated successfully.
Jan 21 09:12:28 np0005590528 podman[246354]: 2026-01-21 14:12:28.215373876 +0000 UTC m=+0.043284075 container create 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:12:28 np0005590528 systemd[1]: Started libpod-conmon-391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6.scope.
Jan 21 09:12:28 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:12:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:28 np0005590528 podman[246354]: 2026-01-21 14:12:28.278187716 +0000 UTC m=+0.106097945 container init 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 09:12:28 np0005590528 podman[246354]: 2026-01-21 14:12:28.283734692 +0000 UTC m=+0.111644901 container start 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 09:12:28 np0005590528 podman[246354]: 2026-01-21 14:12:28.287371701 +0000 UTC m=+0.115281930 container attach 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:12:28 np0005590528 podman[246354]: 2026-01-21 14:12:28.196276441 +0000 UTC m=+0.024186700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:12:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 3 op/s
Jan 21 09:12:28 np0005590528 frosty_easley[246370]: {
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:    "0": [
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:        {
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "devices": [
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "/dev/loop3"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            ],
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_name": "ceph_lv0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_size": "21470642176",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "name": "ceph_lv0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "tags": {
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cluster_name": "ceph",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.crush_device_class": "",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.encrypted": "0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.objectstore": "bluestore",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osd_id": "0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.type": "block",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.vdo": "0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.with_tpm": "0"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            },
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "type": "block",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "vg_name": "ceph_vg0"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:        }
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:    ],
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:    "1": [
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:        {
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "devices": [
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "/dev/loop4"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            ],
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_name": "ceph_lv1",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_size": "21470642176",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "name": "ceph_lv1",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "tags": {
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cluster_name": "ceph",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.crush_device_class": "",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.encrypted": "0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.objectstore": "bluestore",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osd_id": "1",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.type": "block",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.vdo": "0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.with_tpm": "0"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            },
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "type": "block",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "vg_name": "ceph_vg1"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:        }
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:    ],
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:    "2": [
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:        {
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "devices": [
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "/dev/loop5"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            ],
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_name": "ceph_lv2",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_size": "21470642176",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "name": "ceph_lv2",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "tags": {
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.cluster_name": "ceph",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.crush_device_class": "",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.encrypted": "0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.objectstore": "bluestore",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osd_id": "2",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.type": "block",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.vdo": "0",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:                "ceph.with_tpm": "0"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            },
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "type": "block",
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:            "vg_name": "ceph_vg2"
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:        }
Jan 21 09:12:28 np0005590528 frosty_easley[246370]:    ]
Jan 21 09:12:28 np0005590528 frosty_easley[246370]: }
Jan 21 09:12:28 np0005590528 systemd[1]: libpod-391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6.scope: Deactivated successfully.
Jan 21 09:12:28 np0005590528 podman[246354]: 2026-01-21 14:12:28.571115755 +0000 UTC m=+0.399025974 container died 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 09:12:28 np0005590528 systemd[1]: var-lib-containers-storage-overlay-fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae-merged.mount: Deactivated successfully.
Jan 21 09:12:28 np0005590528 podman[246354]: 2026-01-21 14:12:28.62017012 +0000 UTC m=+0.448080329 container remove 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:12:28 np0005590528 systemd[1]: libpod-conmon-391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6.scope: Deactivated successfully.
Jan 21 09:12:29 np0005590528 podman[246452]: 2026-01-21 14:12:29.079680037 +0000 UTC m=+0.044936266 container create 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:12:29 np0005590528 systemd[1]: Started libpod-conmon-136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb.scope.
Jan 21 09:12:29 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:12:29 np0005590528 podman[246452]: 2026-01-21 14:12:29.143214414 +0000 UTC m=+0.108470673 container init 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:12:29 np0005590528 podman[246452]: 2026-01-21 14:12:29.149698653 +0000 UTC m=+0.114954882 container start 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:12:29 np0005590528 podman[246452]: 2026-01-21 14:12:29.152837499 +0000 UTC m=+0.118093788 container attach 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 09:12:29 np0005590528 elegant_morse[246468]: 167 167
Jan 21 09:12:29 np0005590528 systemd[1]: libpod-136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb.scope: Deactivated successfully.
Jan 21 09:12:29 np0005590528 podman[246452]: 2026-01-21 14:12:29.059763742 +0000 UTC m=+0.025020001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:12:29 np0005590528 podman[246452]: 2026-01-21 14:12:29.154746545 +0000 UTC m=+0.120002794 container died 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 09:12:29 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9241a76fd878c73f06d339b0c286ef12a10008a7a59c55087277a10bf40b9126-merged.mount: Deactivated successfully.
Jan 21 09:12:29 np0005590528 podman[246452]: 2026-01-21 14:12:29.19841078 +0000 UTC m=+0.163667029 container remove 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 09:12:29 np0005590528 systemd[1]: libpod-conmon-136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb.scope: Deactivated successfully.
Jan 21 09:12:29 np0005590528 podman[246492]: 2026-01-21 14:12:29.390608033 +0000 UTC m=+0.039146185 container create 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 09:12:29 np0005590528 systemd[1]: Started libpod-conmon-4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9.scope.
Jan 21 09:12:29 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:12:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:12:29 np0005590528 podman[246492]: 2026-01-21 14:12:29.373996248 +0000 UTC m=+0.022534420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:12:29 np0005590528 podman[246492]: 2026-01-21 14:12:29.478402532 +0000 UTC m=+0.126940784 container init 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:12:29 np0005590528 podman[246492]: 2026-01-21 14:12:29.494402952 +0000 UTC m=+0.142941144 container start 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:12:29 np0005590528 podman[246492]: 2026-01-21 14:12:29.49883688 +0000 UTC m=+0.147375042 container attach 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:12:30 np0005590528 lvm[246586]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:12:30 np0005590528 lvm[246586]: VG ceph_vg0 finished
Jan 21 09:12:30 np0005590528 lvm[246588]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:12:30 np0005590528 lvm[246588]: VG ceph_vg1 finished
Jan 21 09:12:30 np0005590528 lvm[246590]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:12:30 np0005590528 lvm[246590]: VG ceph_vg2 finished
Jan 21 09:12:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 16 KiB/s wr, 2 op/s
Jan 21 09:12:30 np0005590528 practical_dhawan[246508]: {}
Jan 21 09:12:30 np0005590528 systemd[1]: libpod-4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9.scope: Deactivated successfully.
Jan 21 09:12:30 np0005590528 systemd[1]: libpod-4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9.scope: Consumed 1.398s CPU time.
Jan 21 09:12:30 np0005590528 podman[246492]: 2026-01-21 14:12:30.375261835 +0000 UTC m=+1.023799987 container died 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:12:30 np0005590528 systemd[1]: var-lib-containers-storage-overlay-444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd-merged.mount: Deactivated successfully.
Jan 21 09:12:30 np0005590528 podman[246492]: 2026-01-21 14:12:30.424531166 +0000 UTC m=+1.073069318 container remove 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 09:12:30 np0005590528 systemd[1]: libpod-conmon-4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9.scope: Deactivated successfully.
Jan 21 09:12:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:12:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:12:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:12:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:12:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823/1fba2d90-325a-4d51-8a85-96a3d9d56a0b'.
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823/.meta.tmp'
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823/.meta.tmp' to config b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823/.meta'
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "format": "json"}]: dispatch
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 09:12:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 09:12:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:12:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:12:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:12:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:12:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 09:12:33 np0005590528 podman[246630]: 2026-01-21 14:12:33.351391003 +0000 UTC m=+0.072223341 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:12:33 np0005590528 podman[246629]: 2026-01-21 14:12:33.38740824 +0000 UTC m=+0.108581066 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 21 09:12:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:12:33.902 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:12:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:12:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:12:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:12:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:12:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 09:12:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "format": "json"}]: dispatch
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:672756b3-d8dc-429b-8b05-6a6f7934e823, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:672756b3-d8dc-429b-8b05-6a6f7934e823, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:36 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:36.398+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '672756b3-d8dc-429b-8b05-6a6f7934e823' of type subvolume
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '672756b3-d8dc-429b-8b05-6a6f7934e823' of type subvolume
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823'' moved to trashcan
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:12:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 09:12:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 0 op/s
Jan 21 09:12:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:12:39
Jan 21 09:12:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:12:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:12:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', 'images']
Jan 21 09:12:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:12:39 np0005590528 nova_compute[239261]: 2026-01-21 14:12:39.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:39 np0005590528 nova_compute[239261]: 2026-01-21 14:12:39.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:12:39 np0005590528 nova_compute[239261]: 2026-01-21 14:12:39.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:12:39 np0005590528 nova_compute[239261]: 2026-01-21 14:12:39.758 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:12:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 17 KiB/s wr, 2 op/s
Jan 21 09:12:40 np0005590528 nova_compute[239261]: 2026-01-21 14:12:40.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:12:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:12:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:12:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Jan 21 09:12:43 np0005590528 nova_compute[239261]: 2026-01-21 14:12:43.436 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:12:43 np0005590528 nova_compute[239261]: 2026-01-21 14:12:43.437 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:12:43 np0005590528 nova_compute[239261]: 2026-01-21 14:12:43.437 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:12:43 np0005590528 nova_compute[239261]: 2026-01-21 14:12:43.437 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:12:43 np0005590528 nova_compute[239261]: 2026-01-21 14:12:43.438 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:12:44 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:12:44.051 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:12:44 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:12:44.052 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:12:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:12:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466131476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:12:44 np0005590528 nova_compute[239261]: 2026-01-21 14:12:44.154 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.716s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:12:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Jan 21 09:12:44 np0005590528 nova_compute[239261]: 2026-01-21 14:12:44.315 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:12:44 np0005590528 nova_compute[239261]: 2026-01-21 14:12:44.316 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:12:44 np0005590528 nova_compute[239261]: 2026-01-21 14:12:44.317 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:12:44 np0005590528 nova_compute[239261]: 2026-01-21 14:12:44.317 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:12:45 np0005590528 nova_compute[239261]: 2026-01-21 14:12:45.002 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:12:45 np0005590528 nova_compute[239261]: 2026-01-21 14:12:45.003 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:12:45 np0005590528 nova_compute[239261]: 2026-01-21 14:12:45.025 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/992722e6-fc0f-4dc3-97ca-752fee9b705f'.
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp'
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp' to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta'
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "format": "json"}]: dispatch
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:12:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:12:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:12:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641634475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:12:45 np0005590528 nova_compute[239261]: 2026-01-21 14:12:45.881 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.856s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:12:45 np0005590528 nova_compute[239261]: 2026-01-21 14:12:45.887 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:12:45 np0005590528 nova_compute[239261]: 2026-01-21 14:12:45.909 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:12:45 np0005590528 nova_compute[239261]: 2026-01-21 14:12:45.910 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:12:45 np0005590528 nova_compute[239261]: 2026-01-21 14:12:45.911 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:12:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 09:12:46 np0005590528 nova_compute[239261]: 2026-01-21 14:12:46.911 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:46 np0005590528 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:46 np0005590528 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:46 np0005590528 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:46 np0005590528 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:46 np0005590528 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:46 np0005590528 nova_compute[239261]: 2026-01-21 14:12:46.913 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:12:46 np0005590528 nova_compute[239261]: 2026-01-21 14:12:46.913 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:12:47 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:12:47.053 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:12:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 2 op/s
Jan 21 09:12:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95", "format": "json"}]: dispatch
Jan 21 09:12:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 3 op/s
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147/4f12a03b-2b1c-4bba-a51f-c6afbf76db5e'.
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147/.meta.tmp'
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147/.meta.tmp' to config b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147/.meta'
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "format": "json"}]: dispatch
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 09:12:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:12:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666224662312277 of space, bias 1.0, pg target 0.1998673986936831 quantized to 32 (current 32)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 3.248075492805729e-05 of space, bias 4.0, pg target 0.03897690591366875 quantized to 16 (current 16)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:12:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:12:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Jan 21 09:12:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95_c7ae62df-b2fa-47d8-aba5-e6ef84f541d4", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95_c7ae62df-b2fa-47d8-aba5-e6ef84f541d4, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp'
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp' to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta'
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95_c7ae62df-b2fa-47d8-aba5-e6ef84f541d4, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp'
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp' to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta'
Jan 21 09:12:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:12:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 2 op/s
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 2 op/s
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "909aa505-0296-4e74-80ca-1c8370556d29", "format": "json"}]: dispatch
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:909aa505-0296-4e74-80ca-1c8370556d29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:909aa505-0296-4e74-80ca-1c8370556d29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:12:58 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:58.932+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '909aa505-0296-4e74-80ca-1c8370556d29' of type subvolume
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '909aa505-0296-4e74-80ca-1c8370556d29' of type subvolume
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "force": true, "format": "json"}]: dispatch
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29'' moved to trashcan
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:12:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 09:13:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 5 op/s
Jan 21 09:13:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 21 09:13:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 21 09:13:02 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 21 09:13:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 09:13:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 09:13:04 np0005590528 podman[246722]: 2026-01-21 14:13:04.323071059 +0000 UTC m=+0.049232570 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 09:13:04 np0005590528 podman[246721]: 2026-01-21 14:13:04.352444175 +0000 UTC m=+0.081693741 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 21 09:13:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 09:13:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 09:13:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 09:13:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:13:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:13:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:13:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:13:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:13:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:13:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 21 09:13:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 21 09:13:11 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 21 09:13:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 09:13:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "format": "json"}]: dispatch
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:13:15 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:15.150+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1dd3a4c4-ba47-419f-88a7-3a23e3b00147' of type subvolume
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1dd3a4c4-ba47-419f-88a7-3a23e3b00147' of type subvolume
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "force": true, "format": "json"}]: dispatch
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147'' moved to trashcan
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:13:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 09:13:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s wr, 0 op/s
Jan 21 09:13:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s wr, 0 op/s
Jan 21 09:13:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:13:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/b0d2c918-ebda-4af3-87c7-5d6e78fc290b'.
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "format": "json"}]: dispatch
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:13:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:13:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:13:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:13:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 8.6 KiB/s wr, 1 op/s
Jan 21 09:13:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:13:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2798542946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:13:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:13:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2798542946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:13:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.1 KiB/s wr, 1 op/s
Jan 21 09:13:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "format": "json"}]: dispatch
Jan 21 09:13:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:13:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:13:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 09:13:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "target_sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 09:13:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, target_sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 09:13:27 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 21 09:13:27 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:27.734450) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:13:27 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 21 09:13:27 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004807734477, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1457, "num_deletes": 255, "total_data_size": 2150262, "memory_usage": 2190096, "flush_reason": "Manual Compaction"}
Jan 21 09:13:27 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 21 09:13:27 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004807863491, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2116965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19672, "largest_seqno": 21128, "table_properties": {"data_size": 2109956, "index_size": 4017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15534, "raw_average_key_size": 20, "raw_value_size": 2095613, "raw_average_value_size": 2790, "num_data_blocks": 181, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004681, "oldest_key_time": 1769004681, "file_creation_time": 1769004807, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:13:27 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 129138 microseconds, and 5045 cpu microseconds.
Jan 21 09:13:27 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:13:27 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/441def10-d72f-43de-9c5a-cb8d8d24291f'.
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:27.863545) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2116965 bytes OK
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:27.863603) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.070786) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.070837) EVENT_LOG_v1 {"time_micros": 1769004808070829, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.070859) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2143624, prev total WAL file size 2143624, number of live WAL files 2.
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.071620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2067KB)], [47(7365KB)]
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004808071672, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9659270, "oldest_snapshot_seqno": -1}
Jan 21 09:13:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4464 keys, 7879263 bytes, temperature: kUnknown
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004808577784, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7879263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7848309, "index_size": 18684, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 110692, "raw_average_key_size": 24, "raw_value_size": 7766515, "raw_average_value_size": 1739, "num_data_blocks": 780, "num_entries": 4464, "num_filter_entries": 4464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:13:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp'
Jan 21 09:13:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp' to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta'
Jan 21 09:13:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] tracking-id e2a84b3d-b747-462a-8827-a0dee34dcf5e for path b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8'
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.578061) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7879263 bytes
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.873523) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.1 rd, 15.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.2 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.3) write-amplify(3.7) OK, records in: 4987, records dropped: 523 output_compression: NoCompression
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.873628) EVENT_LOG_v1 {"time_micros": 1769004808873602, "job": 24, "event": "compaction_finished", "compaction_time_micros": 506188, "compaction_time_cpu_micros": 17453, "output_level": 6, "num_output_files": 1, "total_output_size": 7879263, "num_input_records": 4987, "num_output_records": 4464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004808874273, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004808876141, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.071532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:13:28 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, target_sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 6fd14f2b-0487-4f6b-a678-d4c00c894fd8)
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 6fd14f2b-0487-4f6b-a678-d4c00c894fd8) -- by 0 seconds
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp'
Jan 21 09:13:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp' to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta'
Jan 21 09:13:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.snap/09b94b8f-18fe-41bc-bc29-2dce63cc4501/b0d2c918-ebda-4af3-87c7-5d6e78fc290b' to b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/441def10-d72f-43de-9c5a-cb8d8d24291f'
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp'
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp' to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta'
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] untracking e2a84b3d-b747-462a-8827-a0dee34dcf5e
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 4 op/s
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp'
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp' to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta'
Jan 21 09:13:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 6fd14f2b-0487-4f6b-a678-d4c00c894fd8)
Jan 21 09:13:30 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.tnwklj(active, since 28m)
Jan 21 09:13:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 21 09:13:31 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 21 09:13:31 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Jan 21 09:13:31 np0005590528 ceph-mgr[75322]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 21 09:13:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 21 09:13:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7fc5286a2bb0>
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:13:31 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 18 completed events
Jan 21 09:13:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 09:13:32 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:13:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s wr, 2 op/s
Jan 21 09:13:32 np0005590528 podman[246943]: 2026-01-21 14:13:32.267224392 +0000 UTC m=+0.024314193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:13:32 np0005590528 podman[246943]: 2026-01-21 14:13:32.476128492 +0000 UTC m=+0.233218273 container create 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:13:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:13:32 np0005590528 systemd[1]: Started libpod-conmon-9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62.scope.
Jan 21 09:13:32 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:13:32 np0005590528 podman[246943]: 2026-01-21 14:13:32.580038095 +0000 UTC m=+0.337127906 container init 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 09:13:32 np0005590528 podman[246943]: 2026-01-21 14:13:32.593243396 +0000 UTC m=+0.350333177 container start 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 09:13:32 np0005590528 podman[246943]: 2026-01-21 14:13:32.597206853 +0000 UTC m=+0.354296634 container attach 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 09:13:32 np0005590528 musing_moser[246959]: 167 167
Jan 21 09:13:32 np0005590528 systemd[1]: libpod-9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62.scope: Deactivated successfully.
Jan 21 09:13:32 np0005590528 conmon[246959]: conmon 9320d1c1846641279fc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62.scope/container/memory.events
Jan 21 09:13:32 np0005590528 podman[246943]: 2026-01-21 14:13:32.60203875 +0000 UTC m=+0.359128541 container died 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 21 09:13:32 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3a4c05cec54d077ee00350d1d61662e711fb37d050054c0c0ac0fea9992d4c51-merged.mount: Deactivated successfully.
Jan 21 09:13:32 np0005590528 podman[246943]: 2026-01-21 14:13:32.699818533 +0000 UTC m=+0.456908314 container remove 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:13:32 np0005590528 systemd[1]: libpod-conmon-9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62.scope: Deactivated successfully.
Jan 21 09:13:32 np0005590528 podman[246982]: 2026-01-21 14:13:32.85280648 +0000 UTC m=+0.038223072 container create b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 09:13:32 np0005590528 systemd[1]: Started libpod-conmon-b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8.scope.
Jan 21 09:13:32 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:13:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:32 np0005590528 podman[246982]: 2026-01-21 14:13:32.836084273 +0000 UTC m=+0.021500885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:13:32 np0005590528 podman[246982]: 2026-01-21 14:13:32.932957724 +0000 UTC m=+0.118374366 container init b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 09:13:32 np0005590528 podman[246982]: 2026-01-21 14:13:32.947380315 +0000 UTC m=+0.132796917 container start b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:13:32 np0005590528 podman[246982]: 2026-01-21 14:13:32.951890835 +0000 UTC m=+0.137307467 container attach b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 09:13:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:13:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:13:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:13:33 np0005590528 hopeful_goldstine[246998]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:13:33 np0005590528 hopeful_goldstine[246998]: --> All data devices are unavailable
Jan 21 09:13:33 np0005590528 systemd[1]: libpod-b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8.scope: Deactivated successfully.
Jan 21 09:13:33 np0005590528 conmon[246998]: conmon b997ae0146ab2393b634 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8.scope/container/memory.events
Jan 21 09:13:33 np0005590528 podman[246982]: 2026-01-21 14:13:33.443278168 +0000 UTC m=+0.628694760 container died b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 09:13:33 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f-merged.mount: Deactivated successfully.
Jan 21 09:13:33 np0005590528 podman[246982]: 2026-01-21 14:13:33.5016194 +0000 UTC m=+0.687036032 container remove b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:13:33 np0005590528 systemd[1]: libpod-conmon-b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8.scope: Deactivated successfully.
Jan 21 09:13:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:13:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:13:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:13:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:13:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:13:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:13:33 np0005590528 podman[247093]: 2026-01-21 14:13:33.972433702 +0000 UTC m=+0.043096281 container create d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:13:34 np0005590528 systemd[1]: Started libpod-conmon-d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962.scope.
Jan 21 09:13:34 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:13:34 np0005590528 podman[247093]: 2026-01-21 14:13:33.952853545 +0000 UTC m=+0.023516114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:13:34 np0005590528 podman[247093]: 2026-01-21 14:13:34.053764564 +0000 UTC m=+0.124427153 container init d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:13:34 np0005590528 podman[247093]: 2026-01-21 14:13:34.063461461 +0000 UTC m=+0.134124030 container start d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:13:34 np0005590528 podman[247093]: 2026-01-21 14:13:34.067754065 +0000 UTC m=+0.138416634 container attach d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 09:13:34 np0005590528 systemd[1]: libpod-d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962.scope: Deactivated successfully.
Jan 21 09:13:34 np0005590528 upbeat_feistel[247109]: 167 167
Jan 21 09:13:34 np0005590528 conmon[247109]: conmon d881508b713926b41115 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962.scope/container/memory.events
Jan 21 09:13:34 np0005590528 podman[247093]: 2026-01-21 14:13:34.069421806 +0000 UTC m=+0.140084385 container died d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 09:13:34 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3812bbefbffccfd575c551838b9b5382f1e1a952f2131a54892b139338e44904-merged.mount: Deactivated successfully.
Jan 21 09:13:34 np0005590528 podman[247093]: 2026-01-21 14:13:34.109551544 +0000 UTC m=+0.180214113 container remove d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:13:34 np0005590528 systemd[1]: libpod-conmon-d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962.scope: Deactivated successfully.
Jan 21 09:13:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 24 KiB/s wr, 3 op/s
Jan 21 09:13:34 np0005590528 podman[247136]: 2026-01-21 14:13:34.352149895 +0000 UTC m=+0.071070522 container create 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 09:13:34 np0005590528 systemd[1]: Started libpod-conmon-3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31.scope.
Jan 21 09:13:34 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:13:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:34 np0005590528 podman[247136]: 2026-01-21 14:13:34.322053202 +0000 UTC m=+0.040973929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:13:34 np0005590528 podman[247136]: 2026-01-21 14:13:34.42378357 +0000 UTC m=+0.142704227 container init 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 09:13:34 np0005590528 podman[247136]: 2026-01-21 14:13:34.431927319 +0000 UTC m=+0.150847956 container start 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 09:13:34 np0005590528 podman[247136]: 2026-01-21 14:13:34.435875865 +0000 UTC m=+0.154796512 container attach 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:13:34 np0005590528 podman[247153]: 2026-01-21 14:13:34.448311518 +0000 UTC m=+0.061088420 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 21 09:13:34 np0005590528 podman[247150]: 2026-01-21 14:13:34.490345353 +0000 UTC m=+0.098725778 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]: {
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:    "0": [
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:        {
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "devices": [
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "/dev/loop3"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            ],
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_name": "ceph_lv0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_size": "21470642176",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "name": "ceph_lv0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "tags": {
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cluster_name": "ceph",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.crush_device_class": "",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.encrypted": "0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.objectstore": "bluestore",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osd_id": "0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.type": "block",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.vdo": "0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.with_tpm": "0"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            },
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "type": "block",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "vg_name": "ceph_vg0"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:        }
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:    ],
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:    "1": [
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:        {
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "devices": [
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "/dev/loop4"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            ],
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_name": "ceph_lv1",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_size": "21470642176",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "name": "ceph_lv1",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "tags": {
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cluster_name": "ceph",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.crush_device_class": "",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.encrypted": "0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.objectstore": "bluestore",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osd_id": "1",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.type": "block",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.vdo": "0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.with_tpm": "0"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            },
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "type": "block",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "vg_name": "ceph_vg1"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:        }
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:    ],
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:    "2": [
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:        {
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "devices": [
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "/dev/loop5"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            ],
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_name": "ceph_lv2",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_size": "21470642176",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "name": "ceph_lv2",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "tags": {
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.cluster_name": "ceph",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.crush_device_class": "",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.encrypted": "0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.objectstore": "bluestore",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osd_id": "2",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.type": "block",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.vdo": "0",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:                "ceph.with_tpm": "0"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            },
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "type": "block",
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:            "vg_name": "ceph_vg2"
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:        }
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]:    ]
Jan 21 09:13:34 np0005590528 adoring_bartik[247154]: }
Jan 21 09:13:34 np0005590528 systemd[1]: libpod-3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31.scope: Deactivated successfully.
Jan 21 09:13:34 np0005590528 conmon[247154]: conmon 3f6dd928f1072c74bcc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31.scope/container/memory.events
Jan 21 09:13:34 np0005590528 podman[247136]: 2026-01-21 14:13:34.779285003 +0000 UTC m=+0.498205640 container died 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:13:34 np0005590528 systemd[1]: var-lib-containers-storage-overlay-672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f-merged.mount: Deactivated successfully.
Jan 21 09:13:34 np0005590528 podman[247136]: 2026-01-21 14:13:34.824834953 +0000 UTC m=+0.543755590 container remove 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 09:13:34 np0005590528 systemd[1]: libpod-conmon-3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31.scope: Deactivated successfully.
Jan 21 09:13:35 np0005590528 podman[247280]: 2026-01-21 14:13:35.273844194 +0000 UTC m=+0.042777343 container create 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 09:13:35 np0005590528 systemd[1]: Started libpod-conmon-545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96.scope.
Jan 21 09:13:35 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:13:35 np0005590528 podman[247280]: 2026-01-21 14:13:35.254143383 +0000 UTC m=+0.023076522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:13:35 np0005590528 podman[247280]: 2026-01-21 14:13:35.355003322 +0000 UTC m=+0.123936501 container init 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:13:35 np0005590528 podman[247280]: 2026-01-21 14:13:35.360980807 +0000 UTC m=+0.129913926 container start 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:13:35 np0005590528 podman[247280]: 2026-01-21 14:13:35.365686522 +0000 UTC m=+0.134619641 container attach 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:13:35 np0005590528 trusting_haslett[247296]: 167 167
Jan 21 09:13:35 np0005590528 systemd[1]: libpod-545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96.scope: Deactivated successfully.
Jan 21 09:13:35 np0005590528 conmon[247296]: conmon 545c8aa0c30b3db666f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96.scope/container/memory.events
Jan 21 09:13:35 np0005590528 podman[247280]: 2026-01-21 14:13:35.368677304 +0000 UTC m=+0.137610443 container died 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 09:13:35 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b4a2de0755e778fc10d9fcf737c017df97e1a1972b92a6496ea43cd0c145310b-merged.mount: Deactivated successfully.
Jan 21 09:13:35 np0005590528 podman[247280]: 2026-01-21 14:13:35.410971955 +0000 UTC m=+0.179905104 container remove 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:13:35 np0005590528 systemd[1]: libpod-conmon-545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96.scope: Deactivated successfully.
Jan 21 09:13:35 np0005590528 podman[247319]: 2026-01-21 14:13:35.585239272 +0000 UTC m=+0.051827514 container create 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:13:35 np0005590528 systemd[1]: Started libpod-conmon-11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84.scope.
Jan 21 09:13:35 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:13:35 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:35 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:35 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:35 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:13:35 np0005590528 podman[247319]: 2026-01-21 14:13:35.561697248 +0000 UTC m=+0.028285550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:13:35 np0005590528 podman[247319]: 2026-01-21 14:13:35.65988472 +0000 UTC m=+0.126472982 container init 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 09:13:35 np0005590528 podman[247319]: 2026-01-21 14:13:35.670907609 +0000 UTC m=+0.137495851 container start 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 09:13:35 np0005590528 podman[247319]: 2026-01-21 14:13:35.674401274 +0000 UTC m=+0.140989706 container attach 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 21 09:13:35 np0005590528 nova_compute[239261]: 2026-01-21 14:13:35.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 37 KiB/s wr, 6 op/s
Jan 21 09:13:36 np0005590528 lvm[247414]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:13:36 np0005590528 lvm[247414]: VG ceph_vg1 finished
Jan 21 09:13:36 np0005590528 lvm[247413]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:13:36 np0005590528 lvm[247413]: VG ceph_vg0 finished
Jan 21 09:13:36 np0005590528 lvm[247416]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:13:36 np0005590528 lvm[247416]: VG ceph_vg2 finished
Jan 21 09:13:36 np0005590528 serene_lumiere[247335]: {}
Jan 21 09:13:36 np0005590528 systemd[1]: libpod-11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84.scope: Deactivated successfully.
Jan 21 09:13:36 np0005590528 systemd[1]: libpod-11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84.scope: Consumed 1.301s CPU time.
Jan 21 09:13:36 np0005590528 podman[247419]: 2026-01-21 14:13:36.542439946 +0000 UTC m=+0.027048021 container died 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 09:13:36 np0005590528 systemd[1]: var-lib-containers-storage-overlay-94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f-merged.mount: Deactivated successfully.
Jan 21 09:13:36 np0005590528 podman[247419]: 2026-01-21 14:13:36.583385173 +0000 UTC m=+0.067993238 container remove 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 09:13:36 np0005590528 systemd[1]: libpod-conmon-11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84.scope: Deactivated successfully.
Jan 21 09:13:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:13:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:13:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:13:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:13:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:13:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 27 KiB/s wr, 5 op/s
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/c098c762-a168-49c0-8e80-1871b71016e6'.
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "format": "json"}]: dispatch
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:13:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:13:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:13:39
Jan 21 09:13:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:13:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:13:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'volumes']
Jan 21 09:13:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:13:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 37 KiB/s wr, 7 op/s
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:13:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:13:41 np0005590528 nova_compute[239261]: 2026-01-21 14:13:41.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:41 np0005590528 nova_compute[239261]: 2026-01-21 14:13:41.742 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:13:41 np0005590528 nova_compute[239261]: 2026-01-21 14:13:41.742 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:13:41 np0005590528 nova_compute[239261]: 2026-01-21 14:13:41.755 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34/074547f2-7f4b-4646-af87-b0582d94198e'.
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34/.meta.tmp'
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34/.meta.tmp' to config b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34/.meta'
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "format": "json"}]: dispatch
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 09:13:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 09:13:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:13:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:13:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c", "format": "json"}]: dispatch
Jan 21 09:13:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 23 KiB/s wr, 5 op/s
Jan 21 09:13:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:42 np0005590528 nova_compute[239261]: 2026-01-21 14:13:42.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:42 np0005590528 nova_compute[239261]: 2026-01-21 14:13:42.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:42 np0005590528 nova_compute[239261]: 2026-01-21 14:13:42.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:13:42 np0005590528 nova_compute[239261]: 2026-01-21 14:13:42.755 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:13:42 np0005590528 nova_compute[239261]: 2026-01-21 14:13:42.755 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:13:42 np0005590528 nova_compute[239261]: 2026-01-21 14:13:42.755 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:13:42 np0005590528 nova_compute[239261]: 2026-01-21 14:13:42.756 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:13:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:13:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616145266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.310 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.477 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.479 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5016MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.480 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.480 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.729 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.729 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.803 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing inventories for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.880 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating ProviderTree inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.880 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.903 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing aggregate associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.930 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing trait associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 21 09:13:43 np0005590528 nova_compute[239261]: 2026-01-21 14:13:43.954 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:13:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 23 KiB/s wr, 5 op/s
Jan 21 09:13:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:13:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031350241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:13:44 np0005590528 nova_compute[239261]: 2026-01-21 14:13:44.482 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:13:44 np0005590528 nova_compute[239261]: 2026-01-21 14:13:44.486 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:13:44 np0005590528 nova_compute[239261]: 2026-01-21 14:13:44.563 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:13:44 np0005590528 nova_compute[239261]: 2026-01-21 14:13:44.564 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:13:44 np0005590528 nova_compute[239261]: 2026-01-21 14:13:44.564 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:13:45 np0005590528 nova_compute[239261]: 2026-01-21 14:13:45.565 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:45 np0005590528 nova_compute[239261]: 2026-01-21 14:13:45.565 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:45 np0005590528 nova_compute[239261]: 2026-01-21 14:13:45.566 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:45 np0005590528 nova_compute[239261]: 2026-01-21 14:13:45.566 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 09:13:45 np0005590528 nova_compute[239261]: 2026-01-21 14:13:45.720 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac/4132ab36-79f1-480e-a2a7-55e9bc4b49be'.
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac/.meta.tmp'
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac/.meta.tmp' to config b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac/.meta'
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "format": "json"}]: dispatch
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 09:13:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 09:13:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:13:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:13:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 31 KiB/s wr, 6 op/s
Jan 21 09:13:46 np0005590528 nova_compute[239261]: 2026-01-21 14:13:46.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:46 np0005590528 nova_compute[239261]: 2026-01-21 14:13:46.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:46 np0005590528 nova_compute[239261]: 2026-01-21 14:13:46.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 21 09:13:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7", "format": "json"}]: dispatch
Jan 21 09:13:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:47 np0005590528 nova_compute[239261]: 2026-01-21 14:13:47.790 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:47 np0005590528 nova_compute[239261]: 2026-01-21 14:13:47.810 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 2 op/s
Jan 21 09:13:48 np0005590528 nova_compute[239261]: 2026-01-21 14:13:48.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:13:48 np0005590528 nova_compute[239261]: 2026-01-21 14:13:48.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 21 09:13:48 np0005590528 nova_compute[239261]: 2026-01-21 14:13:48.740 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 4 op/s
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662142449868504 of space, bias 1.0, pg target 0.19986427349605512 quantized to 32 (current 32)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.157038733433985e-05 of space, bias 4.0, pg target 0.06188446480120782 quantized to 16 (current 16)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:13:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:13:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7_dabc2c37-27ba-4a51-926a-eca273cf108c", "force": true, "format": "json"}]: dispatch
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7_dabc2c37-27ba-4a51-926a-eca273cf108c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7_dabc2c37-27ba-4a51-926a-eca273cf108c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7", "force": true, "format": "json"}]: dispatch
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "format": "json"}]: dispatch
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:13:51 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:51.922+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6321f0ab-1903-4b13-841b-f76cfd9c3cac' of type subvolume
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6321f0ab-1903-4b13-841b-f76cfd9c3cac' of type subvolume
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "force": true, "format": "json"}]: dispatch
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac'' moved to trashcan
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:13:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 09:13:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Jan 21 09:13:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c_0d157e9e-6dbe-4da5-910a-8205b873e355", "force": true, "format": "json"}]: dispatch
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c_0d157e9e-6dbe-4da5-910a-8205b873e355, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c_0d157e9e-6dbe-4da5-910a-8205b873e355, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c", "force": true, "format": "json"}]: dispatch
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 09:13:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca/3947d664-f79c-4559-a25c-0c3cb75d8faa'.
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca/.meta.tmp'
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca/.meta.tmp' to config b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca/.meta'
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "format": "json"}]: dispatch
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 09:13:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 09:13:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:13:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:13:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 21 09:13:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 21 09:13:56 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 35 KiB/s wr, 5 op/s
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "format": "json"}]: dispatch
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:13:58 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:58.575+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0' of type subvolume
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0' of type subvolume
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "force": true, "format": "json"}]: dispatch
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0'' moved to trashcan
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:13:58 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 56 KiB/s wr, 7 op/s
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "format": "json"}]: dispatch
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e92e9c0d-aab4-453c-97fd-2dccbd1b01ca' of type subvolume
Jan 21 09:14:00 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:00.741+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e92e9c0d-aab4-453c-97fd-2dccbd1b01ca' of type subvolume
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca'' moved to trashcan
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 09:14:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 21 09:14:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 21 09:14:01 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 21 09:14:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 70 KiB/s wr, 8 op/s
Jan 21 09:14:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 09:14:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:03 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:14:03.674 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:14:03 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:14:03.675 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:14:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 47 KiB/s wr, 5 op/s
Jan 21 09:14:04 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:14:04.677 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:14:05 np0005590528 podman[247504]: 2026-01-21 14:14:05.363745756 +0000 UTC m=+0.070010600 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:14:05 np0005590528 podman[247503]: 2026-01-21 14:14:05.398190047 +0000 UTC m=+0.113208751 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 21 09:14:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 646 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 09:14:06 np0005590528 nova_compute[239261]: 2026-01-21 14:14:06.393 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:06 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 09:14:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 09:14:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 09:14:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:14:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:14:06 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:14:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 09:14:07 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e/8d8c9d37-265b-4f39-a819-7b0f7f9a7c1c'.
Jan 21 09:14:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e/.meta.tmp'
Jan 21 09:14:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e/.meta.tmp' to config b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e/.meta'
Jan 21 09:14:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 09:14:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "format": "json"}]: dispatch
Jan 21 09:14:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 09:14:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 09:14:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:14:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 7 op/s
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "format": "json"}]: dispatch
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bc5fdaa-02e4-4394-ab57-82acdd89427e' of type subvolume
Jan 21 09:14:08 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:08.753+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bc5fdaa-02e4-4394-ab57-82acdd89427e' of type subvolume
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e'' moved to trashcan
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 09:14:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 38 KiB/s wr, 4 op/s
Jan 21 09:14:10 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:14:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:14:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/9d663342-1ee6-4020-b764-46d047183f0b'.
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp'
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp' to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta'
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "format": "json"}]: dispatch
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:14:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:14:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 34 KiB/s wr, 4 op/s
Jan 21 09:14:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:14:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9/e221679c-b44b-4c53-ae2f-803a06a1737b'.
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9/.meta.tmp'
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9/.meta.tmp' to config b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9/.meta'
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "format": "json"}]: dispatch
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 09:14:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:14:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8'' moved to trashcan
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007'.
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 4 op/s
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/.meta.tmp'
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/.meta.tmp' to config b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/.meta'
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "format": "json"}]: dispatch
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:14:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3", "format": "json"}]: dispatch
Jan 21 09:14:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 7 op/s
Jan 21 09:14:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501_9e095a23-07cd-445c-a098-afd719bc8021", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501_9e095a23-07cd-445c-a098-afd719bc8021, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501_9e095a23-07cd-445c-a098-afd719bc8021, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 42 KiB/s wr, 5 op/s
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3_550e4856-6eec-4620-8d3b-79929da3bf92", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3_550e4856-6eec-4620-8d3b-79929da3bf92, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp'
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp' to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta'
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3_550e4856-6eec-4620-8d3b-79929da3bf92, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp'
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp' to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta'
Jan 21 09:14:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 09:14:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Jan 21 09:14:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 21 09:14:19 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID eve49 with tenant 42f926cfde224068a742ef536ed79928
Jan 21 09:14:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:14:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:14:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 09:14:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 21 09:14:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:14:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "format": "json"}]: dispatch
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee86be96-97ed-41e6-a8dc-978f7f6c00d9' of type subvolume
Jan 21 09:14:19 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:19.418+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee86be96-97ed-41e6-a8dc-978f7f6c00d9' of type subvolume
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9'' moved to trashcan
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 09:14:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 79 KiB/s wr, 10 op/s
Jan 21 09:14:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:14:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.0 total, 600.0 interval
                                              Cumulative writes: 4784 writes, 21K keys, 4784 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                              Cumulative WAL: 4784 writes, 4784 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 1427 writes, 6540 keys, 1427 commit groups, 1.0 writes per commit group, ingest: 9.48 MB, 0.02 MB/s
                                              Interval WAL: 1427 writes, 1427 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                              
                                              ** Compaction Stats [default] **
                                              Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                              ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     41.5      0.60              0.08        12    0.050       0      0       0.0       0.0
                                                L6      1/0    7.51 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     37.3     30.7      2.63              0.24        11    0.239     48K   5782       0.0       0.0
                                               Sum      1/0    7.51 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     30.4     32.7      3.23              0.32        23    0.140     48K   5782       0.0       0.0
                                               Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     46.3     46.7      1.00              0.14        10    0.100     24K   2590       0.0       0.0
                                              
                                              ** Compaction Stats [default] **
                                              Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                              ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                               Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     37.3     30.7      2.63              0.24        11    0.239     48K   5782       0.0       0.0
                                              High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     41.7      0.59              0.08        11    0.054       0      0       0.0       0.0
                                              User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                              
                                              Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                              
                                              Uptime(secs): 1800.0 total, 600.0 interval
                                              Flush(GB): cumulative 0.024, interval 0.008
                                              AddFile(GB): cumulative 0.000, interval 0.000
                                              AddFile(Total Files): cumulative 0, interval 0
                                              AddFile(L0 Files): cumulative 0, interval 0
                                              AddFile(Keys): cumulative 0, interval 0
                                              Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 3.2 seconds
                                              Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds
                                              Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                              Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 304.00 MB usage: 9.34 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.0001 secs_since: 0
                                              Block cache entry stats(count,size,portion): DataBlock(572,8.94 MB,2.93957%) FilterBlock(24,142.05 KB,0.0456308%) IndexBlock(24,274.08 KB,0.0880442%) Misc(1,0.00 KB,0%)
                                              
                                              ** File Read Latency Histogram By Level [default] **
Jan 21 09:14:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "format": "json"}]: dispatch
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1442b436-f5bb-47c2-acbf-ac7903d9399e' of type subvolume
Jan 21 09:14:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:21.721+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1442b436-f5bb-47c2-acbf-ac7903d9399e' of type subvolume
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e'' moved to trashcan
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "format": "json"}]: dispatch
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:22 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:22.159+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '398fce8f-70d1-42b2-8ff9-f180fc0fd07c' of type subvolume
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '398fce8f-70d1-42b2-8ff9-f180fc0fd07c' of type subvolume
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c'' moved to trashcan
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 21 09:14:22 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID eve48 with tenant 42f926cfde224068a742ef536ed79928
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 09:14:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 61 KiB/s wr, 8 op/s
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668645748' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:14:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668645748' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:14:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 10 op/s
Jan 21 09:14:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 82 KiB/s wr, 12 op/s
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef/598a97dd-299d-4cce-beaf-351d6cc6c6de'.
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef/.meta.tmp'
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef/.meta.tmp' to config b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef/.meta'
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "format": "json"}]: dispatch
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 09:14:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:14:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Jan 21 09:14:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 21 09:14:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Jan 21 09:14:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Jan 21 09:14:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007
Jan 21 09:14:26 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007],prefix=session evict} (starting...)
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:14:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 21 09:14:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Jan 21 09:14:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Jan 21 09:14:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 82 KiB/s wr, 12 op/s
Jan 21 09:14:29 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "format": "json"}]: dispatch
Jan 21 09:14:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd111cee-cd8c-410d-afba-122eba9f97ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd111cee-cd8c-410d-afba-122eba9f97ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:30.000+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dd111cee-cd8c-410d-afba-122eba9f97ef' of type subvolume
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dd111cee-cd8c-410d-afba-122eba9f97ef' of type subvolume
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef'' moved to trashcan
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 09:14:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Jan 21 09:14:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 21 09:14:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID eve47 with tenant 42f926cfde224068a742ef536ed79928
Jan 21 09:14:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:14:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:14:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:14:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 09:14:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 21 09:14:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 21 09:14:31 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 21 09:14:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 21 09:14:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:14:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:14:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 719 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 09:14:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:14:33.904 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:14:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:14:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:14:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:14:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Jan 21 09:14:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 21 09:14:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Jan 21 09:14:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Jan 21 09:14:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007
Jan 21 09:14:34 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007],prefix=session evict} (starting...)
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:14:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 21 09:14:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Jan 21 09:14:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Jan 21 09:14:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:36 np0005590528 podman[247553]: 2026-01-21 14:14:36.332493844 +0000 UTC m=+0.055246673 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 09:14:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 09:14:36 np0005590528 podman[247552]: 2026-01-21 14:14:36.35637572 +0000 UTC m=+0.085528444 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "format": "json"}]: dispatch
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98584c80-dc48-400e-a1ef-b94d26420f34, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98584c80-dc48-400e-a1ef-b94d26420f34, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:37 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:37.277+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98584c80-dc48-400e-a1ef-b94d26420f34' of type subvolume
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98584c80-dc48-400e-a1ef-b94d26420f34' of type subvolume
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34'' moved to trashcan
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:14:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:14:37 np0005590528 podman[247737]: 2026-01-21 14:14:37.901261203 +0000 UTC m=+0.024627065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:14:38 np0005590528 podman[247737]: 2026-01-21 14:14:38.16321237 +0000 UTC m=+0.286578212 container create b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 21 09:14:38 np0005590528 systemd[1]: Started libpod-conmon-b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645.scope.
Jan 21 09:14:38 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:14:38 np0005590528 podman[247737]: 2026-01-21 14:14:38.266325067 +0000 UTC m=+0.389691009 container init b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:14:38 np0005590528 podman[247737]: 2026-01-21 14:14:38.2735369 +0000 UTC m=+0.396902742 container start b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:14:38 np0005590528 podman[247737]: 2026-01-21 14:14:38.277546207 +0000 UTC m=+0.400912099 container attach b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 09:14:38 np0005590528 practical_dhawan[247752]: 167 167
Jan 21 09:14:38 np0005590528 systemd[1]: libpod-b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645.scope: Deactivated successfully.
Jan 21 09:14:38 np0005590528 podman[247737]: 2026-01-21 14:14:38.281166434 +0000 UTC m=+0.404532286 container died b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:14:38 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3cab275c7dfc106251facb1a3c19fe0cc118df6fdc53726744a0f6ae12abd52b-merged.mount: Deactivated successfully.
Jan 21 09:14:38 np0005590528 podman[247737]: 2026-01-21 14:14:38.337347988 +0000 UTC m=+0.460713830 container remove b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:14:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 09:14:38 np0005590528 systemd[1]: libpod-conmon-b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645.scope: Deactivated successfully.
Jan 21 09:14:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:14:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:14:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:14:38 np0005590528 podman[247776]: 2026-01-21 14:14:38.555294944 +0000 UTC m=+0.106033938 container create ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 09:14:38 np0005590528 podman[247776]: 2026-01-21 14:14:38.472702153 +0000 UTC m=+0.023441137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:14:38 np0005590528 systemd[1]: Started libpod-conmon-ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea.scope.
Jan 21 09:14:38 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:14:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:38 np0005590528 podman[247776]: 2026-01-21 14:14:38.819731591 +0000 UTC m=+0.370470575 container init ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:14:38 np0005590528 podman[247776]: 2026-01-21 14:14:38.827357935 +0000 UTC m=+0.378096899 container start ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:14:38 np0005590528 podman[247776]: 2026-01-21 14:14:38.877641097 +0000 UTC m=+0.428380051 container attach ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:14:39 np0005590528 pensive_swirles[247792]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:14:39 np0005590528 pensive_swirles[247792]: --> All data devices are unavailable
Jan 21 09:14:39 np0005590528 systemd[1]: libpod-ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea.scope: Deactivated successfully.
Jan 21 09:14:39 np0005590528 podman[247776]: 2026-01-21 14:14:39.324990305 +0000 UTC m=+0.875729259 container died ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:39 np0005590528 systemd[1]: var-lib-containers-storage-overlay-9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61-merged.mount: Deactivated successfully.
Jan 21 09:14:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Jan 21 09:14:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 21 09:14:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Jan 21 09:14:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Jan 21 09:14:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Jan 21 09:14:39 np0005590528 podman[247776]: 2026-01-21 14:14:39.481169881 +0000 UTC m=+1.031908825 container remove ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:39 np0005590528 systemd[1]: libpod-conmon-ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea.scope: Deactivated successfully.
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007
Jan 21 09:14:39 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007],prefix=session evict} (starting...)
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 21 09:14:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Jan 21 09:14:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:14:39
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes']
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "format": "json"}]: dispatch
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0a87ab89-25c8-43d7-9b97-672b44e8c221' of type subvolume
Jan 21 09:14:39 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:39.646+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0a87ab89-25c8-43d7-9b97-672b44e8c221' of type subvolume
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "force": true, "format": "json"}]: dispatch
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221'' moved to trashcan
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:14:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 09:14:39 np0005590528 podman[247888]: 2026-01-21 14:14:39.964160287 +0000 UTC m=+0.058680506 container create 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:14:40 np0005590528 systemd[1]: Started libpod-conmon-0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b.scope.
Jan 21 09:14:40 np0005590528 podman[247888]: 2026-01-21 14:14:39.936818039 +0000 UTC m=+0.031338278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:14:40 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:14:40 np0005590528 podman[247888]: 2026-01-21 14:14:40.279699167 +0000 UTC m=+0.374219436 container init 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 09:14:40 np0005590528 podman[247888]: 2026-01-21 14:14:40.289574855 +0000 UTC m=+0.384095104 container start 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 21 09:14:40 np0005590528 elastic_taussig[247904]: 167 167
Jan 21 09:14:40 np0005590528 systemd[1]: libpod-0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b.scope: Deactivated successfully.
Jan 21 09:14:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 78 KiB/s wr, 10 op/s
Jan 21 09:14:40 np0005590528 podman[247888]: 2026-01-21 14:14:40.451321425 +0000 UTC m=+0.545841824 container attach 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:14:40 np0005590528 podman[247888]: 2026-01-21 14:14:40.452094324 +0000 UTC m=+0.546614583 container died 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50a494490>)]
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc5286a2fa0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50a49bd60>)]
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:14:41 np0005590528 systemd[1]: var-lib-containers-storage-overlay-05f1c6d85f41f68562cb02a7807f33b484b403eb81222e5d3ec7d9df89fc2cb5-merged.mount: Deactivated successfully.
Jan 21 09:14:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:41 np0005590528 podman[247888]: 2026-01-21 14:14:41.190787546 +0000 UTC m=+1.285307765 container remove 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:14:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:14:41 np0005590528 systemd[1]: libpod-conmon-0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b.scope: Deactivated successfully.
Jan 21 09:14:41 np0005590528 podman[247928]: 2026-01-21 14:14:41.332209527 +0000 UTC m=+0.024248646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:14:41 np0005590528 podman[247928]: 2026-01-21 14:14:41.436248706 +0000 UTC m=+0.128287805 container create ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 09:14:41 np0005590528 systemd[1]: Started libpod-conmon-ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e.scope.
Jan 21 09:14:41 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:14:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:41 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:41 np0005590528 podman[247928]: 2026-01-21 14:14:41.520640711 +0000 UTC m=+0.212679830 container init ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 09:14:41 np0005590528 podman[247928]: 2026-01-21 14:14:41.527076997 +0000 UTC m=+0.219116096 container start ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 09:14:41 np0005590528 podman[247928]: 2026-01-21 14:14:41.534512256 +0000 UTC m=+0.226551375 container attach ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]: {
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:    "0": [
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:        {
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "devices": [
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "/dev/loop3"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            ],
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_name": "ceph_lv0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_size": "21470642176",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "name": "ceph_lv0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "tags": {
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cluster_name": "ceph",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.crush_device_class": "",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.encrypted": "0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.objectstore": "bluestore",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osd_id": "0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.type": "block",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.vdo": "0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.with_tpm": "0"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            },
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "type": "block",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "vg_name": "ceph_vg0"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:        }
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:    ],
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:    "1": [
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:        {
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "devices": [
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "/dev/loop4"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            ],
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_name": "ceph_lv1",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_size": "21470642176",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "name": "ceph_lv1",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "tags": {
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cluster_name": "ceph",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.crush_device_class": "",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.encrypted": "0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.objectstore": "bluestore",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osd_id": "1",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.type": "block",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.vdo": "0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.with_tpm": "0"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            },
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "type": "block",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "vg_name": "ceph_vg1"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:        }
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:    ],
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:    "2": [
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:        {
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "devices": [
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "/dev/loop5"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            ],
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_name": "ceph_lv2",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_size": "21470642176",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "name": "ceph_lv2",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "tags": {
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.cluster_name": "ceph",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.crush_device_class": "",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.encrypted": "0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.objectstore": "bluestore",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osd_id": "2",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.type": "block",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.vdo": "0",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:                "ceph.with_tpm": "0"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            },
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "type": "block",
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:            "vg_name": "ceph_vg2"
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:        }
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]:    ]
Jan 21 09:14:41 np0005590528 recursing_dubinsky[247943]: }
Jan 21 09:14:41 np0005590528 systemd[1]: libpod-ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e.scope: Deactivated successfully.
Jan 21 09:14:41 np0005590528 podman[247928]: 2026-01-21 14:14:41.835162046 +0000 UTC m=+0.527201155 container died ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:14:41 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6-merged.mount: Deactivated successfully.
Jan 21 09:14:41 np0005590528 podman[247928]: 2026-01-21 14:14:41.875346124 +0000 UTC m=+0.567385223 container remove ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:14:41 np0005590528 systemd[1]: libpod-conmon-ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e.scope: Deactivated successfully.
Jan 21 09:14:42 np0005590528 podman[248028]: 2026-01-21 14:14:42.332472278 +0000 UTC m=+0.037363483 container create 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 09:14:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 69 KiB/s wr, 9 op/s
Jan 21 09:14:42 np0005590528 systemd[1]: Started libpod-conmon-7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41.scope.
Jan 21 09:14:42 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:14:42 np0005590528 podman[248028]: 2026-01-21 14:14:42.389711828 +0000 UTC m=+0.094603083 container init 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:14:42 np0005590528 podman[248028]: 2026-01-21 14:14:42.395841806 +0000 UTC m=+0.100733021 container start 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 09:14:42 np0005590528 podman[248028]: 2026-01-21 14:14:42.399790251 +0000 UTC m=+0.104681476 container attach 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:14:42 np0005590528 charming_hodgkin[248044]: 167 167
Jan 21 09:14:42 np0005590528 systemd[1]: libpod-7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41.scope: Deactivated successfully.
Jan 21 09:14:42 np0005590528 podman[248028]: 2026-01-21 14:14:42.401167754 +0000 UTC m=+0.106058969 container died 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:14:42 np0005590528 podman[248028]: 2026-01-21 14:14:42.31597796 +0000 UTC m=+0.020869195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:14:42 np0005590528 systemd[1]: var-lib-containers-storage-overlay-85c1ee6c78c373f9ec9f816f4d7d68907c17ce2c3f2d35d5797481fd67501d8d-merged.mount: Deactivated successfully.
Jan 21 09:14:42 np0005590528 podman[248028]: 2026-01-21 14:14:42.439742884 +0000 UTC m=+0.144634089 container remove 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 09:14:42 np0005590528 systemd[1]: libpod-conmon-7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41.scope: Deactivated successfully.
Jan 21 09:14:42 np0005590528 podman[248068]: 2026-01-21 14:14:42.597388656 +0000 UTC m=+0.033827997 container create bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 09:14:42 np0005590528 systemd[1]: Started libpod-conmon-bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a.scope.
Jan 21 09:14:42 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:14:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:14:42 np0005590528 podman[248068]: 2026-01-21 14:14:42.583784617 +0000 UTC m=+0.020223978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:14:42 np0005590528 podman[248068]: 2026-01-21 14:14:42.695913291 +0000 UTC m=+0.132352642 container init bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:14:42 np0005590528 podman[248068]: 2026-01-21 14:14:42.702187623 +0000 UTC m=+0.138626964 container start bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 09:14:42 np0005590528 podman[248068]: 2026-01-21 14:14:42.705748378 +0000 UTC m=+0.142187719 container attach bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 09:14:42 np0005590528 nova_compute[239261]: 2026-01-21 14:14:42.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:43 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.tnwklj(active, since 30m)
Jan 21 09:14:43 np0005590528 lvm[248163]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:14:43 np0005590528 lvm[248164]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:14:43 np0005590528 lvm[248163]: VG ceph_vg0 finished
Jan 21 09:14:43 np0005590528 lvm[248164]: VG ceph_vg1 finished
Jan 21 09:14:43 np0005590528 lvm[248166]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:14:43 np0005590528 lvm[248166]: VG ceph_vg2 finished
Jan 21 09:14:43 np0005590528 lvm[248168]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:14:43 np0005590528 lvm[248168]: VG ceph_vg2 finished
Jan 21 09:14:43 np0005590528 agitated_meninsky[248085]: {}
Jan 21 09:14:43 np0005590528 systemd[1]: libpod-bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a.scope: Deactivated successfully.
Jan 21 09:14:43 np0005590528 podman[248068]: 2026-01-21 14:14:43.502952583 +0000 UTC m=+0.939391924 container died bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:14:43 np0005590528 systemd[1]: libpod-bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a.scope: Consumed 1.253s CPU time.
Jan 21 09:14:43 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f-merged.mount: Deactivated successfully.
Jan 21 09:14:43 np0005590528 podman[248068]: 2026-01-21 14:14:43.547936137 +0000 UTC m=+0.984375478 container remove bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:14:43 np0005590528 systemd[1]: libpod-conmon-bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a.scope: Deactivated successfully.
Jan 21 09:14:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:14:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:14:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:14:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:14:43 np0005590528 nova_compute[239261]: 2026-01-21 14:14:43.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:43 np0005590528 nova_compute[239261]: 2026-01-21 14:14:43.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:14:43 np0005590528 nova_compute[239261]: 2026-01-21 14:14:43.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:14:43 np0005590528 nova_compute[239261]: 2026-01-21 14:14:43.749 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:14:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 65 KiB/s wr, 8 op/s
Jan 21 09:14:44 np0005590528 nova_compute[239261]: 2026-01-21 14:14:44.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:44 np0005590528 nova_compute[239261]: 2026-01-21 14:14:44.767 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:14:44 np0005590528 nova_compute[239261]: 2026-01-21 14:14:44.768 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:14:44 np0005590528 nova_compute[239261]: 2026-01-21 14:14:44.768 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:14:44 np0005590528 nova_compute[239261]: 2026-01-21 14:14:44.768 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:14:44 np0005590528 nova_compute[239261]: 2026-01-21 14:14:44.769 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:14:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:14:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:14:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:14:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979897544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:14:45 np0005590528 nova_compute[239261]: 2026-01-21 14:14:45.655 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.886s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:14:45 np0005590528 nova_compute[239261]: 2026-01-21 14:14:45.795 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:14:45 np0005590528 nova_compute[239261]: 2026-01-21 14:14:45.796 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5008MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:14:45 np0005590528 nova_compute[239261]: 2026-01-21 14:14:45.796 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:14:45 np0005590528 nova_compute[239261]: 2026-01-21 14:14:45.797 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:14:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 73 KiB/s wr, 10 op/s
Jan 21 09:14:46 np0005590528 nova_compute[239261]: 2026-01-21 14:14:46.683 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:14:46 np0005590528 nova_compute[239261]: 2026-01-21 14:14:46.684 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:14:46 np0005590528 nova_compute[239261]: 2026-01-21 14:14:46.702 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:14:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:14:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099711370' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:14:47 np0005590528 nova_compute[239261]: 2026-01-21 14:14:47.313 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:14:47 np0005590528 nova_compute[239261]: 2026-01-21 14:14:47.322 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:14:47 np0005590528 nova_compute[239261]: 2026-01-21 14:14:47.386 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:14:47 np0005590528 nova_compute[239261]: 2026-01-21 14:14:47.387 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:14:47 np0005590528 nova_compute[239261]: 2026-01-21 14:14:47.388 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:14:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 50 KiB/s wr, 7 op/s
Jan 21 09:14:48 np0005590528 nova_compute[239261]: 2026-01-21 14:14:48.389 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:48 np0005590528 nova_compute[239261]: 2026-01-21 14:14:48.390 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:48 np0005590528 nova_compute[239261]: 2026-01-21 14:14:48.391 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:48 np0005590528 nova_compute[239261]: 2026-01-21 14:14:48.391 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:48 np0005590528 nova_compute[239261]: 2026-01-21 14:14:48.392 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:14:48 np0005590528 nova_compute[239261]: 2026-01-21 14:14:48.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:49 np0005590528 nova_compute[239261]: 2026-01-21 14:14:49.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 8 op/s
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666210317142837 of space, bias 1.0, pg target 0.1998630951428511 quantized to 32 (current 32)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00010063828780500965 of space, bias 4.0, pg target 0.12076594536601158 quantized to 16 (current 16)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:14:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:14:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 3 op/s
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478'.
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/.meta.tmp'
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/.meta.tmp' to config b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/.meta'
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "format": "json"}]: dispatch
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:14:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:14:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:14:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:14:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 3 op/s
Jan 21 09:14:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:14:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 21 KiB/s wr, 3 op/s
Jan 21 09:14:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Jan 21 09:14:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:14:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:14:59 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591'.
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/.meta.tmp'
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/.meta.tmp' to config b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/.meta'
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "format": "json"}]: dispatch
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:15:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7/a3cff92f-535d-4f55-9c8d-18e6e534e3fb'.
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7/.meta.tmp'
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7/.meta.tmp' to config b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7/.meta'
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "format": "json"}]: dispatch
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 09:15:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:15:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:15:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 1 op/s
Jan 21 09:15:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5'.
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/.meta.tmp'
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/.meta.tmp' to config b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/.meta'
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "format": "json"}]: dispatch
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:01 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:15:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:15:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s wr, 1 op/s
Jan 21 09:15:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:03 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 09:15:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 21 09:15:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "tenant_id": "28d69e0c83c84d03bcfbc4f9c9057023", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume authorize, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, tenant_id:28d69e0c83c84d03bcfbc4f9c9057023, vol_name:cephfs) < ""
Jan 21 09:15:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} v 0)
Jan 21 09:15:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} : dispatch
Jan 21 09:15:05 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-483669843 with tenant 28d69e0c83c84d03bcfbc4f9c9057023
Jan 21 09:15:05 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:05 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume authorize, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, tenant_id:28d69e0c83c84d03bcfbc4f9c9057023, vol_name:cephfs) < ""
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "format": "json"}]: dispatch
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '74d6c6f5-e0f2-4207-b59c-99c525e6f1c7' of type subvolume
Jan 21 09:15:05 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:05.688+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '74d6c6f5-e0f2-4207-b59c-99c525e6f1c7' of type subvolume
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "force": true, "format": "json"}]: dispatch
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7'' moved to trashcan
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:15:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 09:15:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} : dispatch
Jan 21 09:15:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:05 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 4 op/s
Jan 21 09:15:07 np0005590528 podman[248250]: 2026-01-21 14:15:07.33772124 +0000 UTC m=+0.062286384 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 21 09:15:07 np0005590528 podman[248249]: 2026-01-21 14:15:07.387444428 +0000 UTC m=+0.112651967 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 09:15:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s wr, 3 op/s
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "format": "json"}]: dispatch
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume deauthorize, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} v 0)
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} : dispatch
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"} v 0)
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"} : dispatch
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"}]': finished
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume deauthorize, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "format": "json"}]: dispatch
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume evict, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-483669843, client_metadata.root=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5
Jan 21 09:15:09 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-483669843,client_metadata.root=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5],prefix=session evict} (starting...)
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume evict, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591
Jan 21 09:15:09 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591],prefix=session evict} (starting...)
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "format": "json"}]: dispatch
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:09.698+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a27e07da-ba92-4d67-aa02-2edb8a28bc44' of type subvolume
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a27e07da-ba92-4d67-aa02-2edb8a28bc44' of type subvolume
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "force": true, "format": "json"}]: dispatch
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44'' moved to trashcan
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "format": "json"}]: dispatch
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0b71204a-52c0-4e93-8d46-339e009b5492, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0b71204a-52c0-4e93-8d46-339e009b5492, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:09.771+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0b71204a-52c0-4e93-8d46-339e009b5492' of type subvolume
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0b71204a-52c0-4e93-8d46-339e009b5492' of type subvolume
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "force": true, "format": "json"}]: dispatch
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492'' moved to trashcan
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:15:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 09:15:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} : dispatch
Jan 21 09:15:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"} : dispatch
Jan 21 09:15:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"}]': finished
Jan 21 09:15:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 KiB/s wr, 9 op/s
Jan 21 09:15:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:15:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:15:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:15:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:15:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:15:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:15:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 82 KiB/s wr, 8 op/s
Jan 21 09:15:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 82 KiB/s wr, 9 op/s
Jan 21 09:15:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 99 KiB/s wr, 12 op/s
Jan 21 09:15:16 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:15:16.736 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:15:16 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:15:16.737 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:15:16 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:15:16.738 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:15:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:15:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:18 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b'.
Jan 21 09:15:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/.meta.tmp'
Jan 21 09:15:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/.meta.tmp' to config b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/.meta'
Jan 21 09:15:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:18 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "format": "json"}]: dispatch
Jan 21 09:15:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:15:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:15:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 63 KiB/s wr, 8 op/s
Jan 21 09:15:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 70 KiB/s wr, 9 op/s
Jan 21 09:15:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 09:15:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:22 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/273413856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:15:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/273413856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:15:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 35 KiB/s wr, 5 op/s
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b
Jan 21 09:15:26 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b],prefix=session evict} (starting...)
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:26 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c804cc91-0101-4131-a680-b760e9df84f1", "format": "json"}]: dispatch
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c804cc91-0101-4131-a680-b760e9df84f1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c804cc91-0101-4131-a680-b760e9df84f1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:26.708+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c804cc91-0101-4131-a680-b760e9df84f1' of type subvolume
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c804cc91-0101-4131-a680-b760e9df84f1' of type subvolume
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "force": true, "format": "json"}]: dispatch
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1'' moved to trashcan
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:15:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 09:15:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:15:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7005 writes, 27K keys, 7005 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7005 writes, 1473 syncs, 4.76 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1137 writes, 2477 keys, 1137 commit groups, 1.0 writes per commit group, ingest: 1.36 MB, 0.00 MB/s#012Interval WAL: 1137 writes, 463 syncs, 2.46 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 09:15:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 2 op/s
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac'.
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/.meta.tmp'
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/.meta.tmp' to config b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/.meta'
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "format": "json"}]: dispatch
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:15:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:15:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 09:15:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 4 op/s
Jan 21 09:15:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:15:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2829 syncs, 3.68 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3207 writes, 10K keys, 3207 commit groups, 1.0 writes per commit group, ingest: 13.35 MB, 0.02 MB/s#012Interval WAL: 3207 writes, 1398 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 09:15:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:33 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 09:15:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:15:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:15:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:15:33.906 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:15:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:15:33.907 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:15:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 42 KiB/s wr, 5 op/s
Jan 21 09:15:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 6 op/s
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:15:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 6974 writes, 26K keys, 6974 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6974 writes, 1420 syncs, 4.91 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1254 writes, 2803 keys, 1254 commit groups, 1.0 writes per commit group, ingest: 1.29 MB, 0.00 MB/s#012Interval WAL: 1254 writes, 494 syncs, 2.54 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 09:15:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 09:15:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac
Jan 21 09:15:37 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac],prefix=session evict} (starting...)
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575'.
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/.meta.tmp'
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/.meta.tmp' to config b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/.meta'
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "format": "json"}]: dispatch
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:15:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5991b6dd-3598-462c-9b52-78412a23786c", "format": "json"}]: dispatch
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5991b6dd-3598-462c-9b52-78412a23786c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5991b6dd-3598-462c-9b52-78412a23786c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5991b6dd-3598-462c-9b52-78412a23786c' of type subvolume
Jan 21 09:15:37 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:37.819+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5991b6dd-3598-462c-9b52-78412a23786c' of type subvolume
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "force": true, "format": "json"}]: dispatch
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c'' moved to trashcan
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:15:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 09:15:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:38 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:38 np0005590528 podman[248296]: 2026-01-21 14:15:38.352842453 +0000 UTC m=+0.070940791 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 21 09:15:38 np0005590528 podman[248295]: 2026-01-21 14:15:38.367504337 +0000 UTC m=+0.092001650 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 21 09:15:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 5 op/s
Jan 21 09:15:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:15:39
Jan 21 09:15:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:15:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:15:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'images']
Jan 21 09:15:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:15:40 np0005590528 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 09:15:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 74 KiB/s wr, 8 op/s
Jan 21 09:15:40 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:40 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 09:15:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:15:41 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:15:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:41 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 4 op/s
Jan 21 09:15:42 np0005590528 nova_compute[239261]: 2026-01-21 14:15:42.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:15:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 5 op/s
Jan 21 09:15:44 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:15:44 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:44 np0005590528 nova_compute[239261]: 2026-01-21 14:15:44.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:15:44 np0005590528 nova_compute[239261]: 2026-01-21 14:15:44.757 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:15:44 np0005590528 nova_compute[239261]: 2026-01-21 14:15:44.757 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:15:44 np0005590528 nova_compute[239261]: 2026-01-21 14:15:44.757 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:15:44 np0005590528 nova_compute[239261]: 2026-01-21 14:15:44.757 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:15:44 np0005590528 nova_compute[239261]: 2026-01-21 14:15:44.758 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:15:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:15:45 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568423292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478
Jan 21 09:15:45 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478],prefix=session evict} (starting...)
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:45 np0005590528 nova_compute[239261]: 2026-01-21 14:15:45.309 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:45 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:45 np0005590528 nova_compute[239261]: 2026-01-21 14:15:45.495 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:15:45 np0005590528 nova_compute[239261]: 2026-01-21 14:15:45.496 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5067MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:15:45 np0005590528 nova_compute[239261]: 2026-01-21 14:15:45.497 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:15:45 np0005590528 nova_compute[239261]: 2026-01-21 14:15:45.497 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:15:45 np0005590528 podman[248578]: 2026-01-21 14:15:45.546598214 +0000 UTC m=+0.039669347 container create 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:15:45 np0005590528 nova_compute[239261]: 2026-01-21 14:15:45.553 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:15:45 np0005590528 nova_compute[239261]: 2026-01-21 14:15:45.555 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:15:45 np0005590528 nova_compute[239261]: 2026-01-21 14:15:45.582 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:15:45 np0005590528 systemd[1]: Started libpod-conmon-720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99.scope.
Jan 21 09:15:45 np0005590528 podman[248578]: 2026-01-21 14:15:45.528342044 +0000 UTC m=+0.021413197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:15:45 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:15:45 np0005590528 podman[248578]: 2026-01-21 14:15:45.645064619 +0000 UTC m=+0.138135772 container init 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 09:15:45 np0005590528 podman[248578]: 2026-01-21 14:15:45.65341178 +0000 UTC m=+0.146482913 container start 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:15:45 np0005590528 podman[248578]: 2026-01-21 14:15:45.657012557 +0000 UTC m=+0.150083690 container attach 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:15:45 np0005590528 frosty_gould[248595]: 167 167
Jan 21 09:15:45 np0005590528 systemd[1]: libpod-720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99.scope: Deactivated successfully.
Jan 21 09:15:45 np0005590528 podman[248578]: 2026-01-21 14:15:45.66129568 +0000 UTC m=+0.154366813 container died 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 09:15:45 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b6f622c086715d30fb2fc856b687a5f5d9d692e0bb479093cd864b8007654a09-merged.mount: Deactivated successfully.
Jan 21 09:15:45 np0005590528 podman[248578]: 2026-01-21 14:15:45.714896842 +0000 UTC m=+0.207967975 container remove 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:15:45 np0005590528 systemd[1]: libpod-conmon-720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99.scope: Deactivated successfully.
Jan 21 09:15:45 np0005590528 podman[248638]: 2026-01-21 14:15:45.879269367 +0000 UTC m=+0.044192357 container create 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:15:45 np0005590528 systemd[1]: Started libpod-conmon-46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee.scope.
Jan 21 09:15:45 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:15:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:45 np0005590528 podman[248638]: 2026-01-21 14:15:45.859418588 +0000 UTC m=+0.024341598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:15:45 np0005590528 podman[248638]: 2026-01-21 14:15:45.994245769 +0000 UTC m=+0.159168799 container init 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 09:15:46 np0005590528 podman[248638]: 2026-01-21 14:15:46.00052143 +0000 UTC m=+0.165444480 container start 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 09:15:46 np0005590528 podman[248638]: 2026-01-21 14:15:46.004396524 +0000 UTC m=+0.169319554 container attach 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253504909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:15:46 np0005590528 nova_compute[239261]: 2026-01-21 14:15:46.177 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:15:46 np0005590528 nova_compute[239261]: 2026-01-21 14:15:46.184 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:46 np0005590528 nova_compute[239261]: 2026-01-21 14:15:46.204 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:15:46 np0005590528 nova_compute[239261]: 2026-01-21 14:15:46.205 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:15:46 np0005590528 nova_compute[239261]: 2026-01-21 14:15:46.206 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:15:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 63 KiB/s wr, 8 op/s
Jan 21 09:15:46 np0005590528 loving_curie[248655]: [
Jan 21 09:15:46 np0005590528 loving_curie[248655]:    {
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "available": false,
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "being_replaced": false,
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "ceph_device_lvm": false,
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "lsm_data": {},
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "lvs": [],
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "path": "/dev/sr0",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "rejected_reasons": [
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "Insufficient space (<5GB)",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "Has a FileSystem"
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        ],
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        "sys_api": {
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "actuators": null,
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "device_nodes": [
Jan 21 09:15:46 np0005590528 loving_curie[248655]:                "sr0"
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            ],
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "devname": "sr0",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "human_readable_size": "482.00 KB",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "id_bus": "ata",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "model": "QEMU DVD-ROM",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "nr_requests": "2",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "parent": "/dev/sr0",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "partitions": {},
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "path": "/dev/sr0",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "removable": "1",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "rev": "2.5+",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "ro": "0",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "rotational": "1",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "sas_address": "",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "sas_device_handle": "",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "scheduler_mode": "mq-deadline",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "sectors": 0,
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "sectorsize": "2048",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "size": 493568.0,
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "support_discard": "2048",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "type": "disk",
Jan 21 09:15:46 np0005590528 loving_curie[248655]:            "vendor": "QEMU"
Jan 21 09:15:46 np0005590528 loving_curie[248655]:        }
Jan 21 09:15:46 np0005590528 loving_curie[248655]:    }
Jan 21 09:15:46 np0005590528 loving_curie[248655]: ]
Jan 21 09:15:46 np0005590528 systemd[1]: libpod-46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee.scope: Deactivated successfully.
Jan 21 09:15:46 np0005590528 podman[248638]: 2026-01-21 14:15:46.592509045 +0000 UTC m=+0.757432035 container died 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 21 09:15:46 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59-merged.mount: Deactivated successfully.
Jan 21 09:15:46 np0005590528 podman[248638]: 2026-01-21 14:15:46.635738938 +0000 UTC m=+0.800661928 container remove 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 09:15:46 np0005590528 systemd[1]: libpod-conmon-46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee.scope: Deactivated successfully.
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:15:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.208 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.208 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.208 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 09:15:47 np0005590528 podman[249583]: 2026-01-21 14:15:47.117405243 +0000 UTC m=+0.024130963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.227 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.229 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.229 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:15:47 np0005590528 podman[249583]: 2026-01-21 14:15:47.266483178 +0000 UTC m=+0.173208908 container create d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 09:15:47 np0005590528 systemd[1]: Started libpod-conmon-d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882.scope.
Jan 21 09:15:47 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:15:47 np0005590528 podman[249583]: 2026-01-21 14:15:47.413131524 +0000 UTC m=+0.319857244 container init d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 09:15:47 np0005590528 podman[249583]: 2026-01-21 14:15:47.421438944 +0000 UTC m=+0.328164634 container start d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:15:47 np0005590528 podman[249583]: 2026-01-21 14:15:47.425179025 +0000 UTC m=+0.331904735 container attach d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:15:47 np0005590528 systemd[1]: libpod-d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882.scope: Deactivated successfully.
Jan 21 09:15:47 np0005590528 condescending_boyd[249599]: 167 167
Jan 21 09:15:47 np0005590528 conmon[249599]: conmon d1b687ed415b3d18b261 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882.scope/container/memory.events
Jan 21 09:15:47 np0005590528 podman[249583]: 2026-01-21 14:15:47.429044258 +0000 UTC m=+0.335769968 container died d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:15:47 np0005590528 systemd[1]: var-lib-containers-storage-overlay-7a76bf51af1f2e75f4d3d2f88e393408b344c78a11294a9f73ecaaf47b6798d6-merged.mount: Deactivated successfully.
Jan 21 09:15:47 np0005590528 podman[249583]: 2026-01-21 14:15:47.47266795 +0000 UTC m=+0.379393690 container remove d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:15:47 np0005590528 systemd[1]: libpod-conmon-d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882.scope: Deactivated successfully.
Jan 21 09:15:47 np0005590528 podman[249623]: 2026-01-21 14:15:47.682757666 +0000 UTC m=+0.043578562 container create 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 09:15:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:15:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:15:47 np0005590528 systemd[1]: Started libpod-conmon-219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9.scope.
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.746 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:15:47 np0005590528 nova_compute[239261]: 2026-01-21 14:15:47.746 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 09:15:47 np0005590528 podman[249623]: 2026-01-21 14:15:47.661706638 +0000 UTC m=+0.022527554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:15:47 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:15:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:47 np0005590528 podman[249623]: 2026-01-21 14:15:47.781616119 +0000 UTC m=+0.142437035 container init 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 09:15:47 np0005590528 podman[249623]: 2026-01-21 14:15:47.79030786 +0000 UTC m=+0.151128756 container start 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 09:15:47 np0005590528 podman[249623]: 2026-01-21 14:15:47.79366952 +0000 UTC m=+0.154490436 container attach 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:15:48 np0005590528 brave_hopper[249639]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:15:48 np0005590528 brave_hopper[249639]: --> All data devices are unavailable
Jan 21 09:15:48 np0005590528 systemd[1]: libpod-219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9.scope: Deactivated successfully.
Jan 21 09:15:48 np0005590528 conmon[249639]: conmon 219de9d00ffeefe621e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9.scope/container/memory.events
Jan 21 09:15:48 np0005590528 podman[249623]: 2026-01-21 14:15:48.263955591 +0000 UTC m=+0.624776497 container died 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 09:15:48 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3-merged.mount: Deactivated successfully.
Jan 21 09:15:48 np0005590528 podman[249623]: 2026-01-21 14:15:48.311008516 +0000 UTC m=+0.671829422 container remove 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 09:15:48 np0005590528 systemd[1]: libpod-conmon-219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9.scope: Deactivated successfully.
Jan 21 09:15:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 6 op/s
Jan 21 09:15:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:15:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:15:48 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:48 np0005590528 nova_compute[239261]: 2026-01-21 14:15:48.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:15:48 np0005590528 podman[249733]: 2026-01-21 14:15:48.797368003 +0000 UTC m=+0.048698625 container create 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 09:15:48 np0005590528 systemd[1]: Started libpod-conmon-06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6.scope.
Jan 21 09:15:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:48 np0005590528 podman[249733]: 2026-01-21 14:15:48.775647979 +0000 UTC m=+0.026978581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:48 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 09:15:48 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:15:48 np0005590528 podman[249733]: 2026-01-21 14:15:48.898697407 +0000 UTC m=+0.150027989 container init 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:15:48 np0005590528 podman[249733]: 2026-01-21 14:15:48.907409477 +0000 UTC m=+0.158740069 container start 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 09:15:48 np0005590528 podman[249733]: 2026-01-21 14:15:48.912810047 +0000 UTC m=+0.164140629 container attach 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:15:48 np0005590528 optimistic_wozniak[249749]: 167 167
Jan 21 09:15:48 np0005590528 systemd[1]: libpod-06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6.scope: Deactivated successfully.
Jan 21 09:15:48 np0005590528 podman[249733]: 2026-01-21 14:15:48.91500155 +0000 UTC m=+0.166332142 container died 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:48 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0c7aa70af6f5ad17b6eac1be8856f5dcea0ba32467ef0b48208228a7c379275e-merged.mount: Deactivated successfully.
Jan 21 09:15:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:48 np0005590528 podman[249733]: 2026-01-21 14:15:48.962156077 +0000 UTC m=+0.213486659 container remove 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:15:48 np0005590528 systemd[1]: libpod-conmon-06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6.scope: Deactivated successfully.
Jan 21 09:15:49 np0005590528 podman[249774]: 2026-01-21 14:15:49.182935501 +0000 UTC m=+0.078463543 container create 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:15:49 np0005590528 systemd[1]: Started libpod-conmon-017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5.scope.
Jan 21 09:15:49 np0005590528 podman[249774]: 2026-01-21 14:15:49.151419181 +0000 UTC m=+0.046947313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:15:49 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:15:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:49 np0005590528 podman[249774]: 2026-01-21 14:15:49.272828709 +0000 UTC m=+0.168356761 container init 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:15:49 np0005590528 podman[249774]: 2026-01-21 14:15:49.281839076 +0000 UTC m=+0.177367108 container start 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:15:49 np0005590528 podman[249774]: 2026-01-21 14:15:49.285533945 +0000 UTC m=+0.181061997 container attach 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]: {
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:    "0": [
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:        {
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "devices": [
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "/dev/loop3"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            ],
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_name": "ceph_lv0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_size": "21470642176",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "name": "ceph_lv0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "tags": {
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cluster_name": "ceph",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.crush_device_class": "",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.encrypted": "0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.objectstore": "bluestore",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osd_id": "0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.type": "block",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.vdo": "0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.with_tpm": "0"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            },
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "type": "block",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "vg_name": "ceph_vg0"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:        }
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:    ],
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:    "1": [
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:        {
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "devices": [
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "/dev/loop4"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            ],
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_name": "ceph_lv1",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_size": "21470642176",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "name": "ceph_lv1",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "tags": {
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cluster_name": "ceph",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.crush_device_class": "",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.encrypted": "0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.objectstore": "bluestore",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osd_id": "1",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.type": "block",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.vdo": "0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.with_tpm": "0"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            },
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "type": "block",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "vg_name": "ceph_vg1"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:        }
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:    ],
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:    "2": [
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:        {
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "devices": [
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "/dev/loop5"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            ],
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_name": "ceph_lv2",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_size": "21470642176",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "name": "ceph_lv2",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "tags": {
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.cluster_name": "ceph",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.crush_device_class": "",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.encrypted": "0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.objectstore": "bluestore",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osd_id": "2",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.type": "block",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.vdo": "0",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:                "ceph.with_tpm": "0"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            },
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "type": "block",
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:            "vg_name": "ceph_vg2"
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:        }
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]:    ]
Jan 21 09:15:49 np0005590528 jolly_khorana[249790]: }
Jan 21 09:15:49 np0005590528 systemd[1]: libpod-017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5.scope: Deactivated successfully.
Jan 21 09:15:49 np0005590528 podman[249774]: 2026-01-21 14:15:49.600308405 +0000 UTC m=+0.495836467 container died 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 09:15:49 np0005590528 systemd[1]: var-lib-containers-storage-overlay-fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876-merged.mount: Deactivated successfully.
Jan 21 09:15:49 np0005590528 podman[249774]: 2026-01-21 14:15:49.656036639 +0000 UTC m=+0.551564671 container remove 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:15:49 np0005590528 systemd[1]: libpod-conmon-017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5.scope: Deactivated successfully.
Jan 21 09:15:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f/742bffda-1701-416d-826e-80b5efe59ac3'.
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f/.meta.tmp'
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f/.meta.tmp' to config b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f/.meta'
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "format": "json"}]: dispatch
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 09:15:50 np0005590528 podman[249873]: 2026-01-21 14:15:50.217369585 +0000 UTC m=+0.068735488 container create 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 09:15:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:15:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:15:50 np0005590528 systemd[1]: Started libpod-conmon-3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368.scope.
Jan 21 09:15:50 np0005590528 podman[249873]: 2026-01-21 14:15:50.182956736 +0000 UTC m=+0.034322729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:15:50 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:15:50 np0005590528 podman[249873]: 2026-01-21 14:15:50.290457488 +0000 UTC m=+0.141823411 container init 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:15:50 np0005590528 podman[249873]: 2026-01-21 14:15:50.300124291 +0000 UTC m=+0.151490194 container start 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:15:50 np0005590528 podman[249873]: 2026-01-21 14:15:50.303772609 +0000 UTC m=+0.155138502 container attach 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 09:15:50 np0005590528 crazy_swanson[249889]: 167 167
Jan 21 09:15:50 np0005590528 systemd[1]: libpod-3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368.scope: Deactivated successfully.
Jan 21 09:15:50 np0005590528 podman[249873]: 2026-01-21 14:15:50.307068328 +0000 UTC m=+0.158434231 container died 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:15:50 np0005590528 systemd[1]: var-lib-containers-storage-overlay-64e739a2f13a030e0e3313ee8871678e23b0dfa9d3e1d228c06602f171c2f725-merged.mount: Deactivated successfully.
Jan 21 09:15:50 np0005590528 podman[249873]: 2026-01-21 14:15:50.345012633 +0000 UTC m=+0.196378536 container remove 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 09:15:50 np0005590528 systemd[1]: libpod-conmon-3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368.scope: Deactivated successfully.
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 110 KiB/s wr, 14 op/s
Jan 21 09:15:50 np0005590528 podman[249913]: 2026-01-21 14:15:50.539262217 +0000 UTC m=+0.040310353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666210317142837 of space, bias 1.0, pg target 0.1998630951428511 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00015084340003052672 of space, bias 4.0, pg target 0.18101208003663208 quantized to 16 (current 16)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:15:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:15:50 np0005590528 podman[249913]: 2026-01-21 14:15:50.823834359 +0000 UTC m=+0.324882515 container create 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:15:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:51 np0005590528 systemd[1]: Started libpod-conmon-55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73.scope.
Jan 21 09:15:51 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:15:51 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:51 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:51 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:51 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:15:51 np0005590528 podman[249913]: 2026-01-21 14:15:51.264421424 +0000 UTC m=+0.765469570 container init 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 09:15:51 np0005590528 podman[249913]: 2026-01-21 14:15:51.273313268 +0000 UTC m=+0.774361404 container start 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 09:15:51 np0005590528 podman[249913]: 2026-01-21 14:15:51.302519043 +0000 UTC m=+0.803567359 container attach 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:15:51 np0005590528 nova_compute[239261]: 2026-01-21 14:15:51.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:15:51 np0005590528 lvm[250011]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:15:51 np0005590528 lvm[250008]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:15:51 np0005590528 lvm[250009]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:15:51 np0005590528 lvm[250011]: VG ceph_vg2 finished
Jan 21 09:15:51 np0005590528 lvm[250008]: VG ceph_vg0 finished
Jan 21 09:15:51 np0005590528 lvm[250009]: VG ceph_vg1 finished
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:52 np0005590528 stoic_keldysh[249930]: {}
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:15:52 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:52 np0005590528 systemd[1]: libpod-55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73.scope: Deactivated successfully.
Jan 21 09:15:52 np0005590528 podman[249913]: 2026-01-21 14:15:52.092914843 +0000 UTC m=+1.593962959 container died 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 21 09:15:52 np0005590528 systemd[1]: libpod-55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73.scope: Consumed 1.428s CPU time.
Jan 21 09:15:52 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35-merged.mount: Deactivated successfully.
Jan 21 09:15:52 np0005590528 podman[249913]: 2026-01-21 14:15:52.144943047 +0000 UTC m=+1.645991173 container remove 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 09:15:52 np0005590528 systemd[1]: libpod-conmon-55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73.scope: Deactivated successfully.
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478
Jan 21 09:15:52 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478],prefix=session evict} (starting...)
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 81 KiB/s wr, 11 op/s
Jan 21 09:15:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:15:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6/40891370-3f9d-46b0-aed4-8e174a61d9cd'.
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6/.meta.tmp'
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6/.meta.tmp' to config b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6/.meta'
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "format": "json"}]: dispatch
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 09:15:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 09:15:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:15:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:15:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 81 KiB/s wr, 11 op/s
Jan 21 09:15:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:15:55 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:15:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:15:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:55 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:15:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:15:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:15:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:15:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:15:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 117 KiB/s wr, 15 op/s
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "format": "json"}]: dispatch
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '12981b05-fe8f-4fd8-aee2-ae12c976e9f6' of type subvolume
Jan 21 09:15:57 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:57.310+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '12981b05-fe8f-4fd8-aee2-ae12c976e9f6' of type subvolume
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "force": true, "format": "json"}]: dispatch
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6'' moved to trashcan
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:15:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 09:15:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 98 KiB/s wr, 12 op/s
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:15:59 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:15:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478
Jan 21 09:15:59 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478],prefix=session evict} (starting...)
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:15:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:16:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:16:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:16:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 152 KiB/s wr, 19 op/s
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "49d5247d-28e2-437f-92c4-34b98896805f", "format": "json"}]: dispatch
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:49d5247d-28e2-437f-92c4-34b98896805f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:49d5247d-28e2-437f-92c4-34b98896805f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '49d5247d-28e2-437f-92c4-34b98896805f' of type subvolume
Jan 21 09:16:00 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:00.811+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '49d5247d-28e2-437f-92c4-34b98896805f' of type subvolume
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "force": true, "format": "json"}]: dispatch
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f'' moved to trashcan
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:16:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 09:16:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 11 op/s
Jan 21 09:16:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:16:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:03 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:16:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:16:03 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 09:16:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:16:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:04 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 12 op/s
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31/f099d9d8-babe-4015-821a-199cd77c6934'.
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31/.meta.tmp'
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31/.meta.tmp' to config b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31/.meta'
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "format": "json"}]: dispatch
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 09:16:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 09:16:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:16:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:16:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 144 KiB/s wr, 18 op/s
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:16:07 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478
Jan 21 09:16:07 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478],prefix=session evict} (starting...)
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:16:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 09:16:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 107 KiB/s wr, 14 op/s
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "format": "json"}]: dispatch
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:08 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:08.591+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62ba25ce-388a-4a45-8f6e-7a5833c81f31' of type subvolume
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62ba25ce-388a-4a45-8f6e-7a5833c81f31' of type subvolume
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "force": true, "format": "json"}]: dispatch
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31'' moved to trashcan
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:16:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 09:16:09 np0005590528 podman[250057]: 2026-01-21 14:16:09.34321212 +0000 UTC m=+0.063730037 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 21 09:16:09 np0005590528 podman[250056]: 2026-01-21 14:16:09.380675314 +0000 UTC m=+0.101192301 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 09:16:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 145 KiB/s wr, 18 op/s
Jan 21 09:16:10 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:16:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:16:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:16:10 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:16:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:16:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:16:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:16:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:16:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:16:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:16:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:16:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "format": "json"}]: dispatch
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:12 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:12.109+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cb5ab99b-0e59-4153-829e-95580fc1cdff' of type subvolume
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cb5ab99b-0e59-4153-829e-95580fc1cdff' of type subvolume
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "force": true, "format": "json"}]: dispatch
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff'' moved to trashcan
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 09:16:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 91 KiB/s wr, 11 op/s
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 92 KiB/s wr, 12 op/s
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.767454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004974767493, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2338, "num_deletes": 253, "total_data_size": 3372699, "memory_usage": 3424048, "flush_reason": "Manual Compaction"}
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:16:14 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:16:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004974943721, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3291406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21129, "largest_seqno": 23466, "table_properties": {"data_size": 3280903, "index_size": 6485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 24917, "raw_average_key_size": 21, "raw_value_size": 3258565, "raw_average_value_size": 2782, "num_data_blocks": 288, "num_entries": 1171, "num_filter_entries": 1171, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004809, "oldest_key_time": 1769004809, "file_creation_time": 1769004974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 176369 microseconds, and 8542 cpu microseconds.
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.943810) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3291406 bytes OK
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.943845) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.948927) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.948964) EVENT_LOG_v1 {"time_micros": 1769004974948954, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.948992) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3362152, prev total WAL file size 3362152, number of live WAL files 2.
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.951085) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3214KB)], [50(7694KB)]
Jan 21 09:16:14 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004974951157, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11170669, "oldest_snapshot_seqno": -1}
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5106 keys, 9386170 bytes, temperature: kUnknown
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004975088689, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9386170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9349904, "index_size": 22396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 125716, "raw_average_key_size": 24, "raw_value_size": 9255917, "raw_average_value_size": 1812, "num_data_blocks": 937, "num_entries": 5106, "num_filter_entries": 5106, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.089679) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9386170 bytes
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.091579) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.1 rd, 68.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 5635, records dropped: 529 output_compression: NoCompression
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.091601) EVENT_LOG_v1 {"time_micros": 1769004975091591, "job": 26, "event": "compaction_finished", "compaction_time_micros": 137702, "compaction_time_cpu_micros": 32317, "output_level": 6, "num_output_files": 1, "total_output_size": 9386170, "num_input_records": 5635, "num_output_records": 5106, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004975092448, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004975094233, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.950981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:16:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:16:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 115 KiB/s wr, 14 op/s
Jan 21 09:16:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 62 KiB/s wr, 8 op/s
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:16:19 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:16:19 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:16:19.611 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:16:19 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:16:19.612 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/c1e7aa2f-b3f5-4340-abef-4ba3f2f8f5fc'.
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp'
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp' to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta'
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "format": "json"}]: dispatch
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:16:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:16:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 82 KiB/s wr, 11 op/s
Jan 21 09:16:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 44 KiB/s wr, 6 op/s
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981460682' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981460682' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:16:23 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf", "format": "json"}]: dispatch
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:16:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:16:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 45 KiB/s wr, 8 op/s
Jan 21 09:16:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 10 op/s
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:16:27 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf_daf7ebd9-3c4d-4d8d-a4bf-e96f88964a3b", "force": true, "format": "json"}]: dispatch
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf_daf7ebd9-3c4d-4d8d-a4bf-e96f88964a3b, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp'
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp' to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta'
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf_daf7ebd9-3c4d-4d8d-a4bf-e96f88964a3b, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf", "force": true, "format": "json"}]: dispatch
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp'
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp' to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta'
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f/114899ef-653e-4eb2-b694-cbe1ddce5d94'.
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f/.meta.tmp'
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f/.meta.tmp' to config b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f/.meta'
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "format": "json"}]: dispatch
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:27 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 6 op/s
Jan 21 09:16:29 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:16:29.615 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:16:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 86 KiB/s wr, 12 op/s
Jan 21 09:16:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 21 09:16:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 21 09:16:31 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 21 09:16:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 79 KiB/s wr, 11 op/s
Jan 21 09:16:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:16:33.906 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:16:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:16:33.907 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:16:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:16:33.907 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a444a045-3a18-4422-831d-838f3d178e33", "format": "json"}]: dispatch
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a444a045-3a18-4422-831d-838f3d178e33, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a444a045-3a18-4422-831d-838f3d178e33, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a444a045-3a18-4422-831d-838f3d178e33' of type subvolume
Jan 21 09:16:34 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:34.240+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a444a045-3a18-4422-831d-838f3d178e33' of type subvolume
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "force": true, "format": "json"}]: dispatch
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33'' moved to trashcan
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 84 KiB/s wr, 71 op/s
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:16:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:16:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 09:16:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:16:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:16:34 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:16:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:16:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:16:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:16:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "format": "json"}]: dispatch
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:96494b3b-24ff-4794-b86c-c27bb64a476f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:96494b3b-24ff-4794-b86c-c27bb64a476f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:36 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:36.212+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '96494b3b-24ff-4794-b86c-c27bb64a476f' of type subvolume
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '96494b3b-24ff-4794-b86c-c27bb64a476f' of type subvolume
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "force": true, "format": "json"}]: dispatch
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f'' moved to trashcan
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 09:16:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 44 KiB/s wr, 97 op/s
Jan 21 09:16:36 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 44 KiB/s wr, 97 op/s
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37'.
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/.meta.tmp'
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/.meta.tmp' to config b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/.meta'
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "format": "json"}]: dispatch
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:16:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:16:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:16:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:16:39 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634/f85cfe1c-ebd8-429a-98e6-8135bd6e60d3'.
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634/.meta.tmp'
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634/.meta.tmp' to config b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634/.meta'
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "format": "json"}]: dispatch
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:16:39
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images']
Jan 21 09:16:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:40 np0005590528 podman[250103]: 2026-01-21 14:16:40.330889557 +0000 UTC m=+0.051721958 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 21 09:16:40 np0005590528 podman[250102]: 2026-01-21 14:16:40.366204409 +0000 UTC m=+0.089296975 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Jan 21 09:16:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 60 KiB/s wr, 97 op/s
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:16:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 21 09:16:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 21 09:16:41 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:16:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 60 KiB/s wr, 97 op/s
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118'.
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/.meta.tmp'
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/.meta.tmp' to config b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/.meta'
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "format": "json"}]: dispatch
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:16:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:16:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:16:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:16:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:16:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 09:16:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:16:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:16:43 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "format": "json"}]: dispatch
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5006c4a9-49c2-40a4-8229-4463bddd3634, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5006c4a9-49c2-40a4-8229-4463bddd3634, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:43 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:43.250+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5006c4a9-49c2-40a4-8229-4463bddd3634' of type subvolume
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5006c4a9-49c2-40a4-8229-4463bddd3634' of type subvolume
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "force": true, "format": "json"}]: dispatch
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634'' moved to trashcan
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:16:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 09:16:43 np0005590528 nova_compute[239261]: 2026-01-21 14:16:43.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:16:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:16:43 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:16:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 116 KiB/s wr, 40 op/s
Jan 21 09:16:45 np0005590528 nova_compute[239261]: 2026-01-21 14:16:45.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:45 np0005590528 nova_compute[239261]: 2026-01-21 14:16:45.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:16:45 np0005590528 nova_compute[239261]: 2026-01-21 14:16:45.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:16:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 117 KiB/s wr, 12 op/s
Jan 21 09:16:46 np0005590528 nova_compute[239261]: 2026-01-21 14:16:46.459 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:16:46 np0005590528 nova_compute[239261]: 2026-01-21 14:16:46.460 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:46 np0005590528 nova_compute[239261]: 2026-01-21 14:16:46.540 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:16:46 np0005590528 nova_compute[239261]: 2026-01-21 14:16:46.540 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:16:46 np0005590528 nova_compute[239261]: 2026-01-21 14:16:46.541 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:16:46 np0005590528 nova_compute[239261]: 2026-01-21 14:16:46.541 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:16:46 np0005590528 nova_compute[239261]: 2026-01-21 14:16:46.541 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:16:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:16:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 09:16:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:16:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3175815341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:16:47 np0005590528 nova_compute[239261]: 2026-01-21 14:16:47.237 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:16:47 np0005590528 nova_compute[239261]: 2026-01-21 14:16:47.413 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:16:47 np0005590528 nova_compute[239261]: 2026-01-21 14:16:47.414 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5081MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:16:47 np0005590528 nova_compute[239261]: 2026-01-21 14:16:47.415 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:16:47 np0005590528 nova_compute[239261]: 2026-01-21 14:16:47.415 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:16:47 np0005590528 nova_compute[239261]: 2026-01-21 14:16:47.617 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:16:47 np0005590528 nova_compute[239261]: 2026-01-21 14:16:47.617 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:16:47 np0005590528 nova_compute[239261]: 2026-01-21 14:16:47.636 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 09:16:48 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID Joe with tenant 183d8c03d481485397037ffe17a60995
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/33814217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 09:16:48 np0005590528 nova_compute[239261]: 2026-01-21 14:16:48.228 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:16:48 np0005590528 nova_compute[239261]: 2026-01-21 14:16:48.233 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:16:48 np0005590528 nova_compute[239261]: 2026-01-21 14:16:48.317 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:16:48 np0005590528 nova_compute[239261]: 2026-01-21 14:16:48.319 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:16:48 np0005590528 nova_compute[239261]: 2026-01-21 14:16:48.320 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 117 KiB/s wr, 12 op/s
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:48 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/c83a9347-47a8-43c7-ae36-697341704e14'.
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "format": "json"}]: dispatch
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:16:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:16:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:16:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 09:16:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:50 np0005590528 nova_compute[239261]: 2026-01-21 14:16:50.315 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:50 np0005590528 nova_compute[239261]: 2026-01-21 14:16:50.315 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:50 np0005590528 nova_compute[239261]: 2026-01-21 14:16:50.316 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:50 np0005590528 nova_compute[239261]: 2026-01-21 14:16:50.316 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:50 np0005590528 nova_compute[239261]: 2026-01-21 14:16:50.316 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:50 np0005590528 nova_compute[239261]: 2026-01-21 14:16:50.316 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662290559085449 of space, bias 1.0, pg target 0.19986871677256346 quantized to 32 (current 32)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00022772385163210003 of space, bias 4.0, pg target 0.27326862195852003 quantized to 16 (current 16)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:16:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:16:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:51 np0005590528 nova_compute[239261]: 2026-01-21 14:16:51.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 96 KiB/s wr, 10 op/s
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:16:52 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239'.
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/.meta.tmp'
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/.meta.tmp' to config b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/.meta'
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "format": "json"}]: dispatch
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "format": "json"}]: dispatch
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:16:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:16:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:16:54 np0005590528 podman[250403]: 2026-01-21 14:16:54.266700228 +0000 UTC m=+0.055061479 container create 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:16:54 np0005590528 systemd[1]: Started libpod-conmon-2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce.scope.
Jan 21 09:16:54 np0005590528 podman[250403]: 2026-01-21 14:16:54.240848675 +0000 UTC m=+0.029209976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:16:54 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:16:54 np0005590528 podman[250403]: 2026-01-21 14:16:54.35678828 +0000 UTC m=+0.145149621 container init 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:16:54 np0005590528 podman[250403]: 2026-01-21 14:16:54.366221707 +0000 UTC m=+0.154582998 container start 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:16:54 np0005590528 podman[250403]: 2026-01-21 14:16:54.370415529 +0000 UTC m=+0.158776820 container attach 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:16:54 np0005590528 pedantic_shannon[250419]: 167 167
Jan 21 09:16:54 np0005590528 systemd[1]: libpod-2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce.scope: Deactivated successfully.
Jan 21 09:16:54 np0005590528 conmon[250419]: conmon 2fedfc9dc4fc17808d3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce.scope/container/memory.events
Jan 21 09:16:54 np0005590528 podman[250403]: 2026-01-21 14:16:54.375111492 +0000 UTC m=+0.163472743 container died 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:16:54 np0005590528 systemd[1]: var-lib-containers-storage-overlay-69da20c2b5622dfda643f7eb72289fc31f21b14d564b8f4b3e70dd5be9d0fa49-merged.mount: Deactivated successfully.
Jan 21 09:16:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 119 KiB/s wr, 11 op/s
Jan 21 09:16:54 np0005590528 podman[250403]: 2026-01-21 14:16:54.419795049 +0000 UTC m=+0.208156300 container remove 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:16:54 np0005590528 systemd[1]: libpod-conmon-2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce.scope: Deactivated successfully.
Jan 21 09:16:54 np0005590528 podman[250442]: 2026-01-21 14:16:54.617265431 +0000 UTC m=+0.042327892 container create 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 09:16:54 np0005590528 systemd[1]: Started libpod-conmon-870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c.scope.
Jan 21 09:16:54 np0005590528 podman[250442]: 2026-01-21 14:16:54.59774048 +0000 UTC m=+0.022802961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:16:54 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:16:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:54 np0005590528 podman[250442]: 2026-01-21 14:16:54.726000853 +0000 UTC m=+0.151063374 container init 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:16:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:16:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:16:54 np0005590528 podman[250442]: 2026-01-21 14:16:54.735325498 +0000 UTC m=+0.160387939 container start 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:16:54 np0005590528 podman[250442]: 2026-01-21 14:16:54.741033436 +0000 UTC m=+0.166096007 container attach 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:16:55 np0005590528 nostalgic_jennings[250458]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:16:55 np0005590528 nostalgic_jennings[250458]: --> All data devices are unavailable
Jan 21 09:16:55 np0005590528 systemd[1]: libpod-870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c.scope: Deactivated successfully.
Jan 21 09:16:55 np0005590528 podman[250442]: 2026-01-21 14:16:55.252151481 +0000 UTC m=+0.677213972 container died 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:16:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d-merged.mount: Deactivated successfully.
Jan 21 09:16:55 np0005590528 podman[250442]: 2026-01-21 14:16:55.310307413 +0000 UTC m=+0.735369904 container remove 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:16:55 np0005590528 systemd[1]: libpod-conmon-870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c.scope: Deactivated successfully.
Jan 21 09:16:55 np0005590528 podman[250551]: 2026-01-21 14:16:55.821653584 +0000 UTC m=+0.054648888 container create d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:16:55 np0005590528 systemd[1]: Started libpod-conmon-d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4.scope.
Jan 21 09:16:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:16:55 np0005590528 podman[250551]: 2026-01-21 14:16:55.896103639 +0000 UTC m=+0.129098963 container init d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:16:55 np0005590528 podman[250551]: 2026-01-21 14:16:55.804546791 +0000 UTC m=+0.037542115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:16:55 np0005590528 podman[250551]: 2026-01-21 14:16:55.90193473 +0000 UTC m=+0.134930064 container start d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 09:16:55 np0005590528 podman[250551]: 2026-01-21 14:16:55.906143331 +0000 UTC m=+0.139138625 container attach d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:16:55 np0005590528 reverent_murdock[250567]: 167 167
Jan 21 09:16:55 np0005590528 systemd[1]: libpod-d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4.scope: Deactivated successfully.
Jan 21 09:16:55 np0005590528 conmon[250567]: conmon d72054ffb6267edbab14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4.scope/container/memory.events
Jan 21 09:16:55 np0005590528 podman[250551]: 2026-01-21 14:16:55.909041451 +0000 UTC m=+0.142036745 container died d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 09:16:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2345ebcb45c8c0ef74b24118d52407823dda686da64705f0267c97f837a6c355-merged.mount: Deactivated successfully.
Jan 21 09:16:55 np0005590528 podman[250551]: 2026-01-21 14:16:55.958315669 +0000 UTC m=+0.191310973 container remove d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 09:16:55 np0005590528 systemd[1]: libpod-conmon-d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4.scope: Deactivated successfully.
Jan 21 09:16:56 np0005590528 podman[250590]: 2026-01-21 14:16:56.147674445 +0000 UTC m=+0.061857162 container create 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 09:16:56 np0005590528 systemd[1]: Started libpod-conmon-6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a.scope.
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:16:56 np0005590528 podman[250590]: 2026-01-21 14:16:56.119466025 +0000 UTC m=+0.033648842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:16:56 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:16:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:56 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:56 np0005590528 podman[250590]: 2026-01-21 14:16:56.238215079 +0000 UTC m=+0.152397806 container init 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 09:16:56 np0005590528 podman[250590]: 2026-01-21 14:16:56.25318801 +0000 UTC m=+0.167370747 container start 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:16:56 np0005590528 podman[250590]: 2026-01-21 14:16:56.257972925 +0000 UTC m=+0.172155652 container attach 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "target_sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, target_sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/87b82afe-d56b-4746-a27d-d86b000ae695'.
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp' to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] tracking-id ce770c3c-0f1b-4ca4-a3e0-05207fc9c27f for path b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, target_sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, af536fd1-8269-495b-9b12-b007bdeeab50)
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, af536fd1-8269-495b-9b12-b007bdeeab50) -- by 0 seconds
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp' to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 8 op/s
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:56 np0005590528 sharp_tu[250607]: {
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:    "0": [
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:        {
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "devices": [
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "/dev/loop3"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            ],
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_name": "ceph_lv0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_size": "21470642176",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "name": "ceph_lv0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "tags": {
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cluster_name": "ceph",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.crush_device_class": "",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.encrypted": "0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.objectstore": "bluestore",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osd_id": "0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.type": "block",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.vdo": "0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.with_tpm": "0"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            },
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "type": "block",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "vg_name": "ceph_vg0"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:        }
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:    ],
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:    "1": [
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:        {
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "devices": [
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "/dev/loop4"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            ],
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_name": "ceph_lv1",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_size": "21470642176",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "name": "ceph_lv1",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "tags": {
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cluster_name": "ceph",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.crush_device_class": "",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.encrypted": "0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.objectstore": "bluestore",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osd_id": "1",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.type": "block",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.vdo": "0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.with_tpm": "0"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            },
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "type": "block",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "vg_name": "ceph_vg1"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:        }
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:    ],
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:    "2": [
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:        {
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "devices": [
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "/dev/loop5"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            ],
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_name": "ceph_lv2",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_size": "21470642176",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "name": "ceph_lv2",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "tags": {
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.cluster_name": "ceph",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.crush_device_class": "",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.encrypted": "0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.objectstore": "bluestore",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osd_id": "2",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.type": "block",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.vdo": "0",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:                "ceph.with_tpm": "0"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            },
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "type": "block",
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:            "vg_name": "ceph_vg2"
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:        }
Jan 21 09:16:56 np0005590528 sharp_tu[250607]:    ]
Jan 21 09:16:56 np0005590528 sharp_tu[250607]: }
Jan 21 09:16:56 np0005590528 systemd[1]: libpod-6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a.scope: Deactivated successfully.
Jan 21 09:16:56 np0005590528 podman[250641]: 2026-01-21 14:16:56.673921495 +0000 UTC m=+0.030524786 container died 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:16:56 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04-merged.mount: Deactivated successfully.
Jan 21 09:16:56 np0005590528 podman[250641]: 2026-01-21 14:16:56.714693508 +0000 UTC m=+0.071296759 container remove 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:16:56 np0005590528 systemd[1]: libpod-conmon-6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a.scope: Deactivated successfully.
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.snap/fb763622-636c-421d-a618-54f14cb70a37/c83a9347-47a8-43c7-ae36-697341704e14' to b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/87b82afe-d56b-4746-a27d-d86b000ae695'
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.tnwklj(active, since 32m)
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp' to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] untracking ce770c3c-0f1b-4ca4-a3e0-05207fc9c27f
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp' to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta'
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, af536fd1-8269-495b-9b12-b007bdeeab50)
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 21 09:16:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 09:16:56 np0005590528 ceph-mgr[75322]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Jan 21 09:16:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.923+0000 7fc516655640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Jan 21 09:16:57 np0005590528 podman[250719]: 2026-01-21 14:16:57.201010676 +0000 UTC m=+0.052595420 container create 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:16:57 np0005590528 systemd[1]: Started libpod-conmon-13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2.scope.
Jan 21 09:16:57 np0005590528 podman[250719]: 2026-01-21 14:16:57.174429285 +0000 UTC m=+0.026014109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:16:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:16:57 np0005590528 podman[250719]: 2026-01-21 14:16:57.290471833 +0000 UTC m=+0.142056657 container init 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 09:16:57 np0005590528 podman[250719]: 2026-01-21 14:16:57.297601445 +0000 UTC m=+0.149186189 container start 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:16:57 np0005590528 podman[250719]: 2026-01-21 14:16:57.3019669 +0000 UTC m=+0.153551644 container attach 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 09:16:57 np0005590528 practical_chatterjee[250736]: 167 167
Jan 21 09:16:57 np0005590528 systemd[1]: libpod-13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2.scope: Deactivated successfully.
Jan 21 09:16:57 np0005590528 podman[250719]: 2026-01-21 14:16:57.303342823 +0000 UTC m=+0.154927567 container died 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:16:57 np0005590528 systemd[1]: var-lib-containers-storage-overlay-118869ce6c07d6b654e32d2cadce44e2daefa6b3996e1ec88d36b9901d1c62f9-merged.mount: Deactivated successfully.
Jan 21 09:16:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 21 09:16:57 np0005590528 ceph-mgr[75322]: [progress WARNING root] complete: ev mgr-vol-ongoing-clones does not exist
Jan 21 09:16:57 np0005590528 ceph-mgr[75322]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 21 09:16:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 21 09:16:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7fc5286a2bb0>
Jan 21 09:16:57 np0005590528 podman[250719]: 2026-01-21 14:16:57.347945129 +0000 UTC m=+0.199529873 container remove 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:16:57 np0005590528 systemd[1]: libpod-conmon-13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2.scope: Deactivated successfully.
Jan 21 09:16:57 np0005590528 podman[250758]: 2026-01-21 14:16:57.568748883 +0000 UTC m=+0.052488936 container create 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 09:16:57 np0005590528 systemd[1]: Started libpod-conmon-8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a.scope.
Jan 21 09:16:57 np0005590528 podman[250758]: 2026-01-21 14:16:57.546685352 +0000 UTC m=+0.030425425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:16:57 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:16:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:57 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:16:57 np0005590528 podman[250758]: 2026-01-21 14:16:57.678134501 +0000 UTC m=+0.161874575 container init 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:16:57 np0005590528 podman[250758]: 2026-01-21 14:16:57.68678666 +0000 UTC m=+0.170526713 container start 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:16:57 np0005590528 podman[250758]: 2026-01-21 14:16:57.690186182 +0000 UTC m=+0.173926235 container attach 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:16:57 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:16:57 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:16:57 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 09:16:58 np0005590528 lvm[250854]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:16:58 np0005590528 lvm[250853]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:16:58 np0005590528 lvm[250853]: VG ceph_vg0 finished
Jan 21 09:16:58 np0005590528 lvm[250854]: VG ceph_vg1 finished
Jan 21 09:16:58 np0005590528 lvm[250856]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:16:58 np0005590528 lvm[250856]: VG ceph_vg2 finished
Jan 21 09:16:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 7 op/s
Jan 21 09:16:58 np0005590528 hopeful_ride[250774]: {}
Jan 21 09:16:58 np0005590528 systemd[1]: libpod-8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a.scope: Deactivated successfully.
Jan 21 09:16:58 np0005590528 systemd[1]: libpod-8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a.scope: Consumed 1.372s CPU time.
Jan 21 09:16:58 np0005590528 podman[250758]: 2026-01-21 14:16:58.532214057 +0000 UTC m=+1.015954110 container died 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:16:58 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3-merged.mount: Deactivated successfully.
Jan 21 09:16:58 np0005590528 podman[250758]: 2026-01-21 14:16:58.598991426 +0000 UTC m=+1.082731489 container remove 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:16:58 np0005590528 systemd[1]: libpod-conmon-8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a.scope: Deactivated successfully.
Jan 21 09:16:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:16:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:16:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:16:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:17:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 108 KiB/s wr, 13 op/s
Jan 21 09:17:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:17:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume authorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 09:17:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} v 0)
Jan 21 09:17:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} : dispatch
Jan 21 09:17:00 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-102251759 with tenant 6b53653c238d45b18082508e065d099c
Jan 21 09:17:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} : dispatch
Jan 21 09:17:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:00 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume authorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 09:17:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 71 KiB/s wr, 9 op/s
Jan 21 09:17:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 09:17:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 10 op/s
Jan 21 09:17:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 55 KiB/s wr, 9 op/s
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 09:17:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:17:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 09:17:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:17:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:17:06 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3'
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239
Jan 21 09:17:07 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239],prefix=session evict} (starting...)
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "format": "json"}]: dispatch
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume deauthorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} v 0)
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} : dispatch
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"} v 0)
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"} : dispatch
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"}]': finished
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume deauthorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "format": "json"}]: dispatch
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume evict, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-102251759, client_metadata.root=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239
Jan 21 09:17:07 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-102251759,client_metadata.root=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239],prefix=session evict} (starting...)
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume evict, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} : dispatch
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"} : dispatch
Jan 21 09:17:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"}]': finished
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341/d87b1044-560b-4671-b984-5a9e764bf8bb'.
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341/.meta.tmp'
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341/.meta.tmp' to config b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341/.meta'
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "format": "json"}]: dispatch
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 55 KiB/s wr, 8 op/s
Jan 21 09:17:08 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 09:17:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:08 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 09:17:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 09:17:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50'' moved to trashcan
Jan 21 09:17:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 09:17:10 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:17:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:17:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:10 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:17:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 109 KiB/s wr, 13 op/s
Jan 21 09:17:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:17:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:11 np0005590528 podman[250900]: 2026-01-21 14:17:11.383819322 +0000 UTC m=+0.100611517 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 21 09:17:11 np0005590528 podman[250899]: 2026-01-21 14:17:11.391642681 +0000 UTC m=+0.108637491 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:17:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 21 09:17:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 09:17:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Jan 21 09:17:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Jan 21 09:17:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118
Jan 21 09:17:11 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118],prefix=session evict} (starting...)
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 68 KiB/s wr, 6 op/s
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "format": "json"}]: dispatch
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:12 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:12.598+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5409ffd4-eaac-442c-8587-e47fdf7d7341' of type subvolume
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5409ffd4-eaac-442c-8587-e47fdf7d7341' of type subvolume
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341'' moved to trashcan
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 09:17:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 09:17:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Jan 21 09:17:12 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37_42f3b54a-8269-4b88-8ab9-77648d8a58e3", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb763622-636c-421d-a618-54f14cb70a37_42f3b54a-8269-4b88-8ab9-77648d8a58e3, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb763622-636c-421d-a618-54f14cb70a37_42f3b54a-8269-4b88-8ab9-77648d8a58e3, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 09:17:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:14 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 21 09:17:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:17:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 09:17:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:17:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:17:14 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:17:14 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 125 KiB/s wr, 12 op/s
Jan 21 09:17:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:17:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:17:15 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "admin", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:17:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 09:17:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Jan 21 09:17:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Jan 21 09:17:15 np0005590528 ceph-mgr[75322]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 21 09:17:15 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 09:17:15 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:15.072+0000 7fc516655640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 21 09:17:15 np0005590528 ceph-mgr[75322]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 21 09:17:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3/be395b17-77ea-4d1f-a3d5-cf12644172e9'.
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3/.meta.tmp'
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3/.meta.tmp' to config b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3/.meta'
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "format": "json"}]: dispatch
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 09:17:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:16 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 114 KiB/s wr, 13 op/s
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "format": "json"}]: dispatch
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:16 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:16.540+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd96d5fd8-0350-40e0-a742-9103d3d18e31' of type subvolume
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd96d5fd8-0350-40e0-a742-9103d3d18e31' of type subvolume
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31'' moved to trashcan
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 09:17:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 21 09:17:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 21 09:17:17 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 21 09:17:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:17:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:17:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:17:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 136 KiB/s wr, 14 op/s
Jan 21 09:17:18 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:17:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 09:17:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 21 09:17:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 09:17:18 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID david with tenant 183d8c03d481485397037ffe17a60995
Jan 21 09:17:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 09:17:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 09:17:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:19 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "format": "json"}]: dispatch
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:19 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:19.926+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f149fb68-d34d-441e-9d9e-10acfdb751c3' of type subvolume
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f149fb68-d34d-441e-9d9e-10acfdb751c3' of type subvolume
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3'' moved to trashcan
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 09:17:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 147 KiB/s wr, 17 op/s
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 21 09:17:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:17:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:17:21 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:17:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:17:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:17:21 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:17:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:17:22 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:17:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 98 KiB/s wr, 13 op/s
Jan 21 09:17:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:17:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877047264' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:17:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:17:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877047264' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/f8485c14-2515-499e-a291-140bfb971fb6'.
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/.meta.tmp'
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/.meta.tmp' to config b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/.meta'
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "format": "json"}]: dispatch
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:24 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:17:24.031 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:17:24 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:17:24.033 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:17:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 141 KiB/s wr, 13 op/s
Jan 21 09:17:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:17:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:17:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:17:24 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:17:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:17:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:26 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:17:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 09:17:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 21 09:17:26 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 09:17:26 np0005590528 ceph-mgr[75322]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Jan 21 09:17:26 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 09:17:26 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:26.093+0000 7fc516655640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Jan 21 09:17:26 np0005590528 ceph-mgr[75322]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Jan 21 09:17:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 797 B/s rd, 126 KiB/s wr, 14 op/s
Jan 21 09:17:26 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 113 KiB/s wr, 12 op/s
Jan 21 09:17:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:17:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:17:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 09:17:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:17:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:17:28 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:29 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:17:29.034 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:17:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:17:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:17:29 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:17:29 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 09:17:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:30 np0005590528 ceph-mgr[75322]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '61ae05c8-89f3-407b-bbf2-1e843fc0b15c'
Jan 21 09:17:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:30 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 09:17:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/f8485c14-2515-499e-a291-140bfb971fb6
Jan 21 09:17:30 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/f8485c14-2515-499e-a291-140bfb971fb6],prefix=session evict} (starting...)
Jan 21 09:17:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 83 KiB/s wr, 8 op/s
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.412465) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051412494, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1367, "num_deletes": 261, "total_data_size": 1649275, "memory_usage": 1685616, "flush_reason": "Manual Compaction"}
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051434094, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1619499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23467, "largest_seqno": 24833, "table_properties": {"data_size": 1612921, "index_size": 3525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15843, "raw_average_key_size": 20, "raw_value_size": 1598823, "raw_average_value_size": 2065, "num_data_blocks": 157, "num_entries": 774, "num_filter_entries": 774, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004975, "oldest_key_time": 1769004975, "file_creation_time": 1769005051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 21693 microseconds, and 6397 cpu microseconds.
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.434153) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1619499 bytes OK
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.434176) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.436626) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.436647) EVENT_LOG_v1 {"time_micros": 1769005051436641, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.436668) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1642565, prev total WAL file size 1642565, number of live WAL files 2.
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.437423) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1581KB)], [53(9166KB)]
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051437458, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11005669, "oldest_snapshot_seqno": -1}
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5336 keys, 10902977 bytes, temperature: kUnknown
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051514313, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10902977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10862957, "index_size": 25574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 132410, "raw_average_key_size": 24, "raw_value_size": 10762830, "raw_average_value_size": 2017, "num_data_blocks": 1071, "num_entries": 5336, "num_filter_entries": 5336, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769005051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.514687) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10902977 bytes
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.516810) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.0 rd, 141.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 5880, records dropped: 544 output_compression: NoCompression
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.516842) EVENT_LOG_v1 {"time_micros": 1769005051516827, "job": 28, "event": "compaction_finished", "compaction_time_micros": 76960, "compaction_time_cpu_micros": 26565, "output_level": 6, "num_output_files": 1, "total_output_size": 10902977, "num_input_records": 5880, "num_output_records": 5336, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051517537, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051521221, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.437355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:17:31 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:17:32 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:17:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:17:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:17:32 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:17:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 277 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 09:17:32 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:17:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:32 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:17:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:33 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:17:33.908 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:17:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:17:33.909 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:17:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:17:33.909 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:17:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 21 09:17:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 09:17:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Jan 21 09:17:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Jan 21 09:17:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37
Jan 21 09:17:33 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37],prefix=session evict} (starting...)
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:17:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:17:34 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/b078df4b-38f9-4410-ab89-a5c09da3b1cb'.
Jan 21 09:17:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp'
Jan 21 09:17:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp' to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta'
Jan 21 09:17:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:17:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "format": "json"}]: dispatch
Jan 21 09:17:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:17:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:17:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 112 KiB/s wr, 10 op/s
Jan 21 09:17:34 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 09:17:34 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Jan 21 09:17:34 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Jan 21 09:17:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 83 KiB/s wr, 9 op/s
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372", "format": "json"}]: dispatch
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 82 KiB/s wr, 7 op/s
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:17:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:17:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 09:17:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:17:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:17:38 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "format": "json"}]: dispatch
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:38.934+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61ae05c8-89f3-407b-bbf2-1e843fc0b15c' of type subvolume
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61ae05c8-89f3-407b-bbf2-1e843fc0b15c' of type subvolume
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c'' moved to trashcan
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 09:17:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:17:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:17:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:17:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:17:39
Jan 21 09:17:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:17:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:17:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'backups']
Jan 21 09:17:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:17:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 60 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 11 op/s
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:17:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:17:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:17:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:42 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:17:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:42 np0005590528 podman[250951]: 2026-01-21 14:17:42.326498209 +0000 UTC m=+0.046873691 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:17:42 np0005590528 podman[250950]: 2026-01-21 14:17:42.352043435 +0000 UTC m=+0.075128592 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 60 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 7 op/s
Jan 21 09:17:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "format": "json"}]: dispatch
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:42 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:42.509+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3' of type subvolume
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3' of type subvolume
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3'' moved to trashcan
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd/c71696ba-936e-4f18-89f5-ca8196c4bb94'.
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd/.meta.tmp'
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd/.meta.tmp' to config b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd/.meta'
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "format": "json"}]: dispatch
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 09:17:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 09:17:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 10 op/s
Jan 21 09:17:44 np0005590528 nova_compute[239261]: 2026-01-21 14:17:44.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82/d69cf44b-7a0d-437d-8b78-55611c70851f'.
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82/.meta.tmp'
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82/.meta.tmp' to config b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82/.meta'
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "format": "json"}]: dispatch
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 09:17:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:45 np0005590528 nova_compute[239261]: 2026-01-21 14:17:45.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:17:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 09:17:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:17:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:17:45 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:45 np0005590528 nova_compute[239261]: 2026-01-21 14:17:45.897 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:17:45 np0005590528 nova_compute[239261]: 2026-01-21 14:17:45.897 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:17:45 np0005590528 nova_compute[239261]: 2026-01-21 14:17:45.898 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:17:45 np0005590528 nova_compute[239261]: 2026-01-21 14:17:45.898 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:17:45 np0005590528 nova_compute[239261]: 2026-01-21 14:17:45.899 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:17:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 9 op/s
Jan 21 09:17:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:17:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219510721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:17:46 np0005590528 nova_compute[239261]: 2026-01-21 14:17:46.501 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "format": "json"}]: dispatch
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9d63fab0-cc30-4952-b485-806c5f0f78c2' of type subvolume
Jan 21 09:17:46 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:46.535+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9d63fab0-cc30-4952-b485-806c5f0f78c2' of type subvolume
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2'' moved to trashcan
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 09:17:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:17:46 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:17:46 np0005590528 nova_compute[239261]: 2026-01-21 14:17:46.652 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:17:46 np0005590528 nova_compute[239261]: 2026-01-21 14:17:46.653 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5054MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:17:46 np0005590528 nova_compute[239261]: 2026-01-21 14:17:46.653 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:17:46 np0005590528 nova_compute[239261]: 2026-01-21 14:17:46.653 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:17:46 np0005590528 nova_compute[239261]: 2026-01-21 14:17:46.725 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:17:46 np0005590528 nova_compute[239261]: 2026-01-21 14:17:46.725 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:17:46 np0005590528 nova_compute[239261]: 2026-01-21 14:17:46.742 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:17:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:17:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555477584' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:17:47 np0005590528 nova_compute[239261]: 2026-01-21 14:17:47.263 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:17:47 np0005590528 nova_compute[239261]: 2026-01-21 14:17:47.269 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:17:47 np0005590528 nova_compute[239261]: 2026-01-21 14:17:47.406 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:17:47 np0005590528 nova_compute[239261]: 2026-01-21 14:17:47.409 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:17:47 np0005590528 nova_compute[239261]: 2026-01-21 14:17:47.410 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "format": "json"}]: dispatch
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4934ef18-6cd5-442b-a8fa-227c3608b0bd' of type subvolume
Jan 21 09:17:47 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:47.505+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4934ef18-6cd5-442b-a8fa-227c3608b0bd' of type subvolume
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd'' moved to trashcan
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:47 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 09:17:48 np0005590528 nova_compute[239261]: 2026-01-21 14:17:48.410 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:17:48 np0005590528 nova_compute[239261]: 2026-01-21 14:17:48.411 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:17:48 np0005590528 nova_compute[239261]: 2026-01-21 14:17:48.411 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:17:48 np0005590528 nova_compute[239261]: 2026-01-21 14:17:48.424 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:17:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226/37873579-fd8d-4e0a-980b-73d1a1678e9b'.
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226/.meta.tmp'
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226/.meta.tmp' to config b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226/.meta'
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "format": "json"}]: dispatch
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:49 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:49 np0005590528 nova_compute[239261]: 2026-01-21 14:17:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:17:49 np0005590528 nova_compute[239261]: 2026-01-21 14:17:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:17:49 np0005590528 nova_compute[239261]: 2026-01-21 14:17:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:49 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "admin", "format": "json"}]: dispatch
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Jan 21 09:17:50 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:50.371+0000 7fc516655640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 61 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 140 KiB/s wr, 15 op/s
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "format": "json"}]: dispatch
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4d9a3e7-c006-4c96-ab86-0ee694f36366' of type subvolume
Jan 21 09:17:50 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:50.561+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4d9a3e7-c006-4c96-ab86-0ee694f36366' of type subvolume
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366'' moved to trashcan
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662283883303134 of space, bias 1.0, pg target 0.199868516499094 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0003203949192705631 of space, bias 4.0, pg target 0.3844739031246757 quantized to 16 (current 16)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:17:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:17:50 np0005590528 nova_compute[239261]: 2026-01-21 14:17:50.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:17:50 np0005590528 nova_compute[239261]: 2026-01-21 14:17:50.893 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:17:50 np0005590528 nova_compute[239261]: 2026-01-21 14:17:50.894 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:17:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6/1245fb95-20a1-49fb-a04e-48144d861baf'.
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6/.meta.tmp'
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6/.meta.tmp' to config b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6/.meta'
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "format": "json"}]: dispatch
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 09:17:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 09:17:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:17:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:17:51 np0005590528 nova_compute[239261]: 2026-01-21 14:17:51.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 61 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "format": "json"}]: dispatch
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a2a02f3a-dc86-4c41-ae4c-20c17fe75226' of type subvolume
Jan 21 09:17:52 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:52.660+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a2a02f3a-dc86-4c41-ae4c-20c17fe75226' of type subvolume
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226'' moved to trashcan
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:17:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 09:17:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:17:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:17:52 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:17:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:17:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:17:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:17:53 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:17:53 np0005590528 nova_compute[239261]: 2026-01-21 14:17:53.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:17:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 147 KiB/s wr, 14 op/s
Jan 21 09:17:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 107 KiB/s wr, 13 op/s
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:17:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:17:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:17:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "format": "json"}]: dispatch
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:56 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:56.937+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '93827b21-dc3b-4f90-ab80-d532ba42cf82' of type subvolume
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '93827b21-dc3b-4f90-ab80-d532ba42cf82' of type subvolume
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82'' moved to trashcan
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "format": "json"}]: dispatch
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:17:57 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:57.047+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6' of type subvolume
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6' of type subvolume
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "force": true, "format": "json"}]: dispatch
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6'' moved to trashcan
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:17:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 09:17:57 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:17:57 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:17:57 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:17:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 09:17:59 np0005590528 podman[251138]: 2026-01-21 14:17:59.338494363 +0000 UTC m=+0.062589465 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:17:59 np0005590528 podman[251138]: 2026-01-21 14:17:59.439935647 +0000 UTC m=+0.164030659 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:18:00 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 146 KiB/s wr, 15 op/s
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "aefb1c02-8305-4e9b-9f91-87659561ca53", "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:aefb1c02-8305-4e9b-9f91-87659561ca53, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 21 09:18:00 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:aefb1c02-8305-4e9b-9f91-87659561ca53, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:18:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:18:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:01 np0005590528 podman[251472]: 2026-01-21 14:18:01.580241032 +0000 UTC m=+0.052162721 container create aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 09:18:01 np0005590528 systemd[1]: Started libpod-conmon-aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316.scope.
Jan 21 09:18:01 np0005590528 podman[251472]: 2026-01-21 14:18:01.552708488 +0000 UTC m=+0.024630217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:18:01 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:18:01 np0005590528 podman[251472]: 2026-01-21 14:18:01.679932325 +0000 UTC m=+0.151854034 container init aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:18:01 np0005590528 podman[251472]: 2026-01-21 14:18:01.686886697 +0000 UTC m=+0.158808376 container start aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:18:01 np0005590528 podman[251472]: 2026-01-21 14:18:01.690045641 +0000 UTC m=+0.161967340 container attach aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 21 09:18:01 np0005590528 systemd[1]: libpod-aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316.scope: Deactivated successfully.
Jan 21 09:18:01 np0005590528 stoic_mayer[251488]: 167 167
Jan 21 09:18:01 np0005590528 conmon[251488]: conmon aabd9d317b4927eff1eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316.scope/container/memory.events
Jan 21 09:18:01 np0005590528 podman[251472]: 2026-01-21 14:18:01.694493005 +0000 UTC m=+0.166414694 container died aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:18:01 np0005590528 systemd[1]: var-lib-containers-storage-overlay-4047451b0460fc3ca7d64588d7ad3e57bb1eb6e94e02ca75f3f4410f5b67884e-merged.mount: Deactivated successfully.
Jan 21 09:18:01 np0005590528 podman[251472]: 2026-01-21 14:18:01.737700066 +0000 UTC m=+0.209621745 container remove aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:18:01 np0005590528 systemd[1]: libpod-conmon-aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316.scope: Deactivated successfully.
Jan 21 09:18:01 np0005590528 podman[251510]: 2026-01-21 14:18:01.941680768 +0000 UTC m=+0.047442830 container create 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 09:18:01 np0005590528 systemd[1]: Started libpod-conmon-3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8.scope.
Jan 21 09:18:02 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:18:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:02 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:02 np0005590528 podman[251510]: 2026-01-21 14:18:01.923606376 +0000 UTC m=+0.029368468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:18:02 np0005590528 podman[251510]: 2026-01-21 14:18:02.024014024 +0000 UTC m=+0.129776106 container init 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 09:18:02 np0005590528 podman[251510]: 2026-01-21 14:18:02.038909643 +0000 UTC m=+0.144671695 container start 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:18:02 np0005590528 podman[251510]: 2026-01-21 14:18:02.041967855 +0000 UTC m=+0.147729917 container attach 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e/4ada938c-7ccc-47bb-8a7d-06e39fa21b91'.
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e/.meta.tmp'
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e/.meta.tmp' to config b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e/.meta'
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "format": "json"}]: dispatch
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 09:18:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:18:02 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:18:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 8 op/s
Jan 21 09:18:02 np0005590528 vibrant_euler[251527]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:18:02 np0005590528 vibrant_euler[251527]: --> All data devices are unavailable
Jan 21 09:18:02 np0005590528 systemd[1]: libpod-3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8.scope: Deactivated successfully.
Jan 21 09:18:02 np0005590528 podman[251510]: 2026-01-21 14:18:02.524929744 +0000 UTC m=+0.630691906 container died 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 09:18:02 np0005590528 systemd[1]: var-lib-containers-storage-overlay-8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388-merged.mount: Deactivated successfully.
Jan 21 09:18:02 np0005590528 podman[251510]: 2026-01-21 14:18:02.567113061 +0000 UTC m=+0.672875113 container remove 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 09:18:02 np0005590528 systemd[1]: libpod-conmon-3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8.scope: Deactivated successfully.
Jan 21 09:18:03 np0005590528 podman[251621]: 2026-01-21 14:18:03.041765666 +0000 UTC m=+0.049132890 container create 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:18:03 np0005590528 systemd[1]: Started libpod-conmon-22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf.scope.
Jan 21 09:18:03 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:18:03 np0005590528 podman[251621]: 2026-01-21 14:18:03.020043249 +0000 UTC m=+0.027410503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:18:03 np0005590528 podman[251621]: 2026-01-21 14:18:03.12910713 +0000 UTC m=+0.136474374 container init 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:18:03 np0005590528 podman[251621]: 2026-01-21 14:18:03.136678337 +0000 UTC m=+0.144045561 container start 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:18:03 np0005590528 podman[251621]: 2026-01-21 14:18:03.140209259 +0000 UTC m=+0.147576503 container attach 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:18:03 np0005590528 agitated_mcclintock[251637]: 167 167
Jan 21 09:18:03 np0005590528 systemd[1]: libpod-22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf.scope: Deactivated successfully.
Jan 21 09:18:03 np0005590528 podman[251621]: 2026-01-21 14:18:03.143433635 +0000 UTC m=+0.150800859 container died 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:18:03 np0005590528 systemd[1]: var-lib-containers-storage-overlay-90a3da2ce1e88f00dfcf93aa8f8b518d895f2a07b52b63880cd63d8d374b90f2-merged.mount: Deactivated successfully.
Jan 21 09:18:03 np0005590528 podman[251621]: 2026-01-21 14:18:03.188406447 +0000 UTC m=+0.195773671 container remove 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:18:03 np0005590528 systemd[1]: libpod-conmon-22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf.scope: Deactivated successfully.
Jan 21 09:18:03 np0005590528 podman[251661]: 2026-01-21 14:18:03.363233328 +0000 UTC m=+0.045077936 container create fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:18:03 np0005590528 systemd[1]: Started libpod-conmon-fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd.scope.
Jan 21 09:18:03 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:18:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:03 np0005590528 podman[251661]: 2026-01-21 14:18:03.343604789 +0000 UTC m=+0.025449417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:18:03 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:03 np0005590528 podman[251661]: 2026-01-21 14:18:03.449155398 +0000 UTC m=+0.131000036 container init fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:18:03 np0005590528 podman[251661]: 2026-01-21 14:18:03.455434425 +0000 UTC m=+0.137279033 container start fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:18:03 np0005590528 podman[251661]: 2026-01-21 14:18:03.458673711 +0000 UTC m=+0.140518339 container attach fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 09:18:03 np0005590528 loving_johnson[251678]: {
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:    "0": [
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:        {
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "devices": [
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "/dev/loop3"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            ],
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_name": "ceph_lv0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_size": "21470642176",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "name": "ceph_lv0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "tags": {
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cluster_name": "ceph",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.crush_device_class": "",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.encrypted": "0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.objectstore": "bluestore",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osd_id": "0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.type": "block",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.vdo": "0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.with_tpm": "0"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            },
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "type": "block",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "vg_name": "ceph_vg0"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:        }
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:    ],
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:    "1": [
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:        {
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "devices": [
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "/dev/loop4"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            ],
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_name": "ceph_lv1",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_size": "21470642176",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "name": "ceph_lv1",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "tags": {
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cluster_name": "ceph",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.crush_device_class": "",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.encrypted": "0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.objectstore": "bluestore",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osd_id": "1",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.type": "block",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.vdo": "0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.with_tpm": "0"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            },
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "type": "block",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "vg_name": "ceph_vg1"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:        }
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:    ],
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:    "2": [
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:        {
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "devices": [
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "/dev/loop5"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            ],
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_name": "ceph_lv2",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_size": "21470642176",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "name": "ceph_lv2",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "tags": {
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.cluster_name": "ceph",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.crush_device_class": "",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.encrypted": "0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.objectstore": "bluestore",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osd_id": "2",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.type": "block",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.vdo": "0",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:                "ceph.with_tpm": "0"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            },
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "type": "block",
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:            "vg_name": "ceph_vg2"
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:        }
Jan 21 09:18:03 np0005590528 loving_johnson[251678]:    ]
Jan 21 09:18:03 np0005590528 loving_johnson[251678]: }
Jan 21 09:18:03 np0005590528 systemd[1]: libpod-fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd.scope: Deactivated successfully.
Jan 21 09:18:03 np0005590528 podman[251661]: 2026-01-21 14:18:03.751502112 +0000 UTC m=+0.433346710 container died fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 09:18:03 np0005590528 systemd[1]: var-lib-containers-storage-overlay-1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd-merged.mount: Deactivated successfully.
Jan 21 09:18:03 np0005590528 podman[251661]: 2026-01-21 14:18:03.796612097 +0000 UTC m=+0.478456715 container remove fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:18:03 np0005590528 systemd[1]: libpod-conmon-fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd.scope: Deactivated successfully.
Jan 21 09:18:04 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "aefb1c02-8305-4e9b-9f91-87659561ca53", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:aefb1c02-8305-4e9b-9f91-87659561ca53, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 21 09:18:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:aefb1c02-8305-4e9b-9f91-87659561ca53, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 21 09:18:04 np0005590528 podman[251763]: 2026-01-21 14:18:04.237958864 +0000 UTC m=+0.041920272 container create 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:18:04 np0005590528 systemd[1]: Started libpod-conmon-66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5.scope.
Jan 21 09:18:04 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:18:04 np0005590528 podman[251763]: 2026-01-21 14:18:04.220212378 +0000 UTC m=+0.024173806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:18:04 np0005590528 podman[251763]: 2026-01-21 14:18:04.328474341 +0000 UTC m=+0.132435779 container init 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 09:18:04 np0005590528 podman[251763]: 2026-01-21 14:18:04.334368098 +0000 UTC m=+0.138329506 container start 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:18:04 np0005590528 compassionate_galileo[251779]: 167 167
Jan 21 09:18:04 np0005590528 podman[251763]: 2026-01-21 14:18:04.339352045 +0000 UTC m=+0.143313453 container attach 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 09:18:04 np0005590528 systemd[1]: libpod-66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5.scope: Deactivated successfully.
Jan 21 09:18:04 np0005590528 podman[251763]: 2026-01-21 14:18:04.340084292 +0000 UTC m=+0.144045690 container died 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:18:04 np0005590528 systemd[1]: var-lib-containers-storage-overlay-23b58e06fa2acb276e43418ea56e5a966e8ba5e02f516178947f18e99d9beafd-merged.mount: Deactivated successfully.
Jan 21 09:18:04 np0005590528 podman[251763]: 2026-01-21 14:18:04.382773411 +0000 UTC m=+0.186734839 container remove 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 09:18:04 np0005590528 systemd[1]: libpod-conmon-66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5.scope: Deactivated successfully.
Jan 21 09:18:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 9 op/s
Jan 21 09:18:04 np0005590528 podman[251802]: 2026-01-21 14:18:04.549542163 +0000 UTC m=+0.041686037 container create 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:18:04 np0005590528 systemd[1]: Started libpod-conmon-2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841.scope.
Jan 21 09:18:04 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:18:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:04 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:18:04 np0005590528 podman[251802]: 2026-01-21 14:18:04.529853412 +0000 UTC m=+0.021997306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:18:04 np0005590528 podman[251802]: 2026-01-21 14:18:04.62635197 +0000 UTC m=+0.118495844 container init 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:18:04 np0005590528 podman[251802]: 2026-01-21 14:18:04.633277842 +0000 UTC m=+0.125421716 container start 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 09:18:04 np0005590528 podman[251802]: 2026-01-21 14:18:04.636792934 +0000 UTC m=+0.128936878 container attach 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:18:04 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:18:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:18:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:04 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:18:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:18:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:04 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:04 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:05 np0005590528 lvm[251896]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:18:05 np0005590528 lvm[251897]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:18:05 np0005590528 lvm[251897]: VG ceph_vg1 finished
Jan 21 09:18:05 np0005590528 lvm[251896]: VG ceph_vg0 finished
Jan 21 09:18:05 np0005590528 lvm[251899]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:18:05 np0005590528 lvm[251899]: VG ceph_vg2 finished
Jan 21 09:18:05 np0005590528 inspiring_hugle[251818]: {}
Jan 21 09:18:05 np0005590528 systemd[1]: libpod-2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841.scope: Deactivated successfully.
Jan 21 09:18:05 np0005590528 systemd[1]: libpod-2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841.scope: Consumed 1.427s CPU time.
Jan 21 09:18:05 np0005590528 podman[251802]: 2026-01-21 14:18:05.533824582 +0000 UTC m=+1.025968516 container died 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "format": "json"}]: dispatch
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '03db482a-4a9f-44b3-ba43-fe5ff12e229e' of type subvolume
Jan 21 09:18:05 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:05.870+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '03db482a-4a9f-44b3-ba43-fe5ff12e229e' of type subvolume
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e'' moved to trashcan
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:18:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 09:18:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 66 KiB/s wr, 9 op/s
Jan 21 09:18:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:07 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:08 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012-merged.mount: Deactivated successfully.
Jan 21 09:18:08 np0005590528 podman[251802]: 2026-01-21 14:18:08.111155892 +0000 UTC m=+3.603299756 container remove 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:18:08 np0005590528 systemd[1]: libpod-conmon-2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841.scope: Deactivated successfully.
Jan 21 09:18:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:18:08 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:18:08 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 65 KiB/s wr, 7 op/s
Jan 21 09:18:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:08 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:18:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 103 KiB/s wr, 11 op/s
Jan 21 09:18:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:18:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:18:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:18:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:18:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:18:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50ada2df0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50ada28b0>)]
Jan 21 09:18:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:18:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:18:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "701e45e4-89b7-4d59-81cf-4a02e67d640b", "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:701e45e4-89b7-4d59-81cf-4a02e67d640b, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 21 09:18:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:701e45e4-89b7-4d59-81cf-4a02e67d640b, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 21 09:18:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 6 op/s
Jan 21 09:18:12 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.tnwklj(active, since 33m)
Jan 21 09:18:13 np0005590528 podman[251943]: 2026-01-21 14:18:13.344748641 +0000 UTC m=+0.062523434 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:18:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:18:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:13 np0005590528 podman[251942]: 2026-01-21 14:18:13.382981295 +0000 UTC m=+0.100711387 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:18:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:18:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 09:18:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:18:13 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:18:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:18:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:18:13 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:18:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:18:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:18:13 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:18:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 7 op/s
Jan 21 09:18:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 09:18:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "701e45e4-89b7-4d59-81cf-4a02e67d640b", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:701e45e4-89b7-4d59-81cf-4a02e67d640b, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 21 09:18:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:701e45e4-89b7-4d59-81cf-4a02e67d640b, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:18:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a/f9498102-fea3-4cc1-a405-bc1e6a9a7838'.
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a/.meta.tmp'
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a/.meta.tmp' to config b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a/.meta'
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "format": "json"}]: dispatch
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc/a3719bda-5e93-4bc8-a6ba-43796639d277'.
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc/.meta.tmp'
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc/.meta.tmp' to config b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc/.meta'
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "format": "json"}]: dispatch
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:17 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 66 KiB/s wr, 6 op/s
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/1f606e4a-db26-4f08-a985-162ca262e6fc'.
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp'
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp' to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta'
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "format": "json"}]: dispatch
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:18:19 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 99 KiB/s wr, 10 op/s
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:18:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:18:20 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 09:18:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:18:20 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:18:20 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "format": "json"}]: dispatch
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:20.896+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '619f82dd-2461-43fe-994a-71a6fb22cc9a' of type subvolume
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '619f82dd-2461-43fe-994a-71a6fb22cc9a' of type subvolume
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a'' moved to trashcan
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:18:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 09:18:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:18:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:18:20 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "format": "json"}]: dispatch
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:21.104+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd6e6fe01-b413-4bf6-b249-91dc19a3e3fc' of type subvolume
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd6e6fe01-b413-4bf6-b249-91dc19a3e3fc' of type subvolume
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc'' moved to trashcan
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:18:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 09:18:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 61 KiB/s wr, 6 op/s
Jan 21 09:18:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b", "format": "json"}]: dispatch
Jan 21 09:18:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:23 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:18:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2885507471' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:18:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:18:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2885507471' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:18:24 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89/d5ba67b3-4905-4db0-80f8-8f636f4190ab'.
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89/.meta.tmp'
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89/.meta.tmp' to config b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89/.meta'
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "format": "json"}]: dispatch
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 62 KiB/s wr, 8 op/s
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b/c6b7d6f4-455b-41b1-b245-1cc7021ed1f6'.
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b/.meta.tmp'
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b/.meta.tmp' to config b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b/.meta'
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "format": "json"}]: dispatch
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:18:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:18:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:18:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 121 KiB/s wr, 12 op/s
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b_beb041ba-4500-43bc-91c9-794ff68a2025", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b_beb041ba-4500-43bc-91c9-794ff68a2025, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp'
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp' to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta'
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b_beb041ba-4500-43bc-91c9-794ff68a2025, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp'
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp' to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta'
Jan 21 09:18:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 10 op/s
Jan 21 09:18:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 09:18:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:18:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 09:18:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:18:28 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:18:28 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 09:18:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 09:18:28 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "format": "json"}]: dispatch
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6b95bf55-eb7d-43c0-9f25-36884d529a89' of type subvolume
Jan 21 09:18:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:28.786+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6b95bf55-eb7d-43c0-9f25-36884d529a89' of type subvolume
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89'' moved to trashcan
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "format": "json"}]: dispatch
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:28.815+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1154fdb-2d56-4b79-b688-aa930d49c33b' of type subvolume
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1154fdb-2d56-4b79-b688-aa930d49c33b' of type subvolume
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b'' moved to trashcan
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:18:28 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 147 KiB/s wr, 14 op/s
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "format": "json"}]: dispatch
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:30 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:30.841+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75787c7c-a801-4e74-8f54-f20d6b4880b0' of type subvolume
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75787c7c-a801-4e74-8f54-f20d6b4880b0' of type subvolume
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0'' moved to trashcan
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:18:30 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 09:18:31 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:18:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:18:31 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 21 09:18:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 21 09:18:32 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372_73e67d86-f13b-459e-a543-f49785594c69", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372_73e67d86-f13b-459e-a543-f49785594c69, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp'
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp' to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta'
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372_73e67d86-f13b-459e-a543-f49785594c69, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp'
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp' to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta'
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:18:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 137 KiB/s wr, 13 op/s
Jan 21 09:18:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:18:33.909 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:18:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:18:33.910 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:18:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:18:33.910 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:18:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:18:34.364 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:18:34 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:18:34.366 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 137 KiB/s wr, 12 op/s
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:18:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:18:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 09:18:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:18:34 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:18:34 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:18:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:35 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:18:35.368 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:18:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:18:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:18:35 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "format": "json"}]: dispatch
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fb09730-0544-4361-97f8-11e56000d2f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fb09730-0544-4361-97f8-11e56000d2f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fb09730-0544-4361-97f8-11e56000d2f0' of type subvolume
Jan 21 09:18:35 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:35.937+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fb09730-0544-4361-97f8-11e56000d2f0' of type subvolume
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0'' moved to trashcan
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:18:35 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 09:18:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 110 KiB/s wr, 12 op/s
Jan 21 09:18:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 21 09:18:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 21 09:18:36 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 21 09:18:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 57 KiB/s wr, 8 op/s
Jan 21 09:18:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:18:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:18:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:18:38 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:18:39 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:18:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:18:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:18:39
Jan 21 09:18:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:18:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:18:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Jan 21 09:18:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:18:40 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:40 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 140 KiB/s wr, 15 op/s
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:18:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:18:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 21 09:18:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 21 09:18:41 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 09:18:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:18:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 09:18:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:18:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:18:42 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 09:18:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 09:18:42 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 09:18:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 143 KiB/s wr, 14 op/s
Jan 21 09:18:43 np0005590528 nova_compute[239261]: 2026-01-21 14:18:43.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:18:44 np0005590528 podman[251991]: 2026-01-21 14:18:44.340191458 +0000 UTC m=+0.056156775 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 09:18:44 np0005590528 podman[251990]: 2026-01-21 14:18:44.36762195 +0000 UTC m=+0.087733904 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 21 09:18:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 09:18:45 np0005590528 nova_compute[239261]: 2026-01-21 14:18:45.869 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 09:18:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:18:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:46 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:18:46 np0005590528 nova_compute[239261]: 2026-01-21 14:18:46.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.752784) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126752828, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1310, "num_deletes": 252, "total_data_size": 1605822, "memory_usage": 1639048, "flush_reason": "Manual Compaction"}
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126766778, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1576581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24834, "largest_seqno": 26143, "table_properties": {"data_size": 1570254, "index_size": 3338, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15677, "raw_average_key_size": 21, "raw_value_size": 1556712, "raw_average_value_size": 2092, "num_data_blocks": 149, "num_entries": 744, "num_filter_entries": 744, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769005052, "oldest_key_time": 1769005052, "file_creation_time": 1769005126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14057 microseconds, and 7093 cpu microseconds.
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.766838) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1576581 bytes OK
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.766862) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.770329) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.770355) EVENT_LOG_v1 {"time_micros": 1769005126770349, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.770376) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1599388, prev total WAL file size 1599388, number of live WAL files 2.
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.771143) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1539KB)], [56(10MB)]
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126771205, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12479558, "oldest_snapshot_seqno": -1}
Jan 21 09:18:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:46 np0005590528 nova_compute[239261]: 2026-01-21 14:18:46.786 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:18:46 np0005590528 nova_compute[239261]: 2026-01-21 14:18:46.786 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:18:46 np0005590528 nova_compute[239261]: 2026-01-21 14:18:46.787 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:18:46 np0005590528 nova_compute[239261]: 2026-01-21 14:18:46.787 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:18:46 np0005590528 nova_compute[239261]: 2026-01-21 14:18:46.787 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5553 keys, 10672296 bytes, temperature: kUnknown
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126855859, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10672296, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10631179, "index_size": 26159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 138286, "raw_average_key_size": 24, "raw_value_size": 10527586, "raw_average_value_size": 1895, "num_data_blocks": 1091, "num_entries": 5553, "num_filter_entries": 5553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769005126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.856222) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10672296 bytes
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.857927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.2 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.4 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(14.7) write-amplify(6.8) OK, records in: 6080, records dropped: 527 output_compression: NoCompression
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.857948) EVENT_LOG_v1 {"time_micros": 1769005126857938, "job": 30, "event": "compaction_finished", "compaction_time_micros": 84802, "compaction_time_cpu_micros": 32203, "output_level": 6, "num_output_files": 1, "total_output_size": 10672296, "num_input_records": 6080, "num_output_records": 5553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126858472, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126861292, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.771035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:18:46 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:18:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:18:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867212393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.345 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.492 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.493 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.493 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.494 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:18:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:47 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.757 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.757 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.871 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing inventories for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.966 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating ProviderTree inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.967 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 21 09:18:47 np0005590528 nova_compute[239261]: 2026-01-21 14:18:47.982 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing aggregate associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 21 09:18:48 np0005590528 nova_compute[239261]: 2026-01-21 14:18:48.007 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing trait associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 21 09:18:48 np0005590528 nova_compute[239261]: 2026-01-21 14:18:48.022 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:18:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 09:18:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:18:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541470220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:18:48 np0005590528 nova_compute[239261]: 2026-01-21 14:18:48.565 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:18:48 np0005590528 nova_compute[239261]: 2026-01-21 14:18:48.574 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:18:48 np0005590528 nova_compute[239261]: 2026-01-21 14:18:48.592 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:18:48 np0005590528 nova_compute[239261]: 2026-01-21 14:18:48.595 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:18:48 np0005590528 nova_compute[239261]: 2026-01-21 14:18:48.596 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/0d39357a-8414-4374-b0d6-05a412ce9464'.
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "format": "json"}]: dispatch
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:18:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.596 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.597 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.597 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.622 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:18:49 np0005590528 nova_compute[239261]: 2026-01-21 14:18:49.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:18:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 09:18:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:18:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:18:49 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:18:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s wr, 5 op/s
Jan 21 09:18:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:18:50 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666226090619191 of space, bias 1.0, pg target 0.1998678271857573 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004046650892931975 of space, bias 4.0, pg target 0.48559810715183704 quantized to 16 (current 16)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:18:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:18:50 np0005590528 nova_compute[239261]: 2026-01-21 14:18:50.739 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:18:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:51 np0005590528 nova_compute[239261]: 2026-01-21 14:18:51.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:18:51 np0005590528 nova_compute[239261]: 2026-01-21 14:18:51.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:18:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 5 op/s
Jan 21 09:18:53 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54", "format": "json"}]: dispatch
Jan 21 09:18:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:54 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 09:18:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:18:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:54 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:18:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:18:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:18:54 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5", "format": "json"}]: dispatch
Jan 21 09:18:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:54 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 5 op/s
Jan 21 09:18:54 np0005590528 nova_compute[239261]: 2026-01-21 14:18:54.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:18:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:18:54 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5_3865f07e-c822-4316-8c79-2b1c82ad80c4", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5_3865f07e-c822-4316-8c79-2b1c82ad80c4, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5_3865f07e-c822-4316-8c79-2b1c82ad80c4, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5", "force": true, "format": "json"}]: dispatch
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:18:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 09:18:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 74 KiB/s wr, 7 op/s
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 09:18:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 09:18:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:18:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:18:59 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:18:59 np0005590528 nova_compute[239261]: 2026-01-21 14:18:59.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:18:59 np0005590528 nova_compute[239261]: 2026-01-21 14:18:59.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 21 09:18:59 np0005590528 nova_compute[239261]: 2026-01-21 14:18:59.750 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 21 09:18:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 09:18:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 09:18:59 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5", "format": "json"}]: dispatch
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:18:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 110 KiB/s wr, 10 op/s
Jan 21 09:19:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 21 09:19:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 21 09:19:02 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 21 09:19:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 9 op/s
Jan 21 09:19:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:19:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:19:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 09:19:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:03 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 09:19:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 09:19:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:19:03 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:19:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:19:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 09:19:03 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 09:19:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 9 op/s
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5_882c1b0b-28af-4e8f-82ab-4260c6bbe2e1", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5_882c1b0b-28af-4e8f-82ab-4260c6bbe2e1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5_882c1b0b-28af-4e8f-82ab-4260c6bbe2e1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:05 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57'.
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/.meta.tmp'
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/.meta.tmp' to config b'/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/.meta'
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "format": "json"}]: dispatch
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 09:19:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 09:19:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:19:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:19:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:19:09 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19", "format": "json"}]: dispatch
Jan 21 09:19:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:09 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:19:09 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:19:10 np0005590528 podman[252226]: 2026-01-21 14:19:10.074807138 +0000 UTC m=+0.041858010 container create ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 09:19:10 np0005590528 systemd[1]: Started libpod-conmon-ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8.scope.
Jan 21 09:19:10 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:19:10 np0005590528 podman[252226]: 2026-01-21 14:19:10.058469136 +0000 UTC m=+0.025520028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:19:10 np0005590528 podman[252226]: 2026-01-21 14:19:10.16335376 +0000 UTC m=+0.130404662 container init ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:19:10 np0005590528 podman[252226]: 2026-01-21 14:19:10.172878723 +0000 UTC m=+0.139929605 container start ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:19:10 np0005590528 podman[252226]: 2026-01-21 14:19:10.176389906 +0000 UTC m=+0.143440798 container attach ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:19:10 np0005590528 systemd[1]: libpod-ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8.scope: Deactivated successfully.
Jan 21 09:19:10 np0005590528 cranky_wescoff[252242]: 167 167
Jan 21 09:19:10 np0005590528 podman[252226]: 2026-01-21 14:19:10.183117112 +0000 UTC m=+0.150168004 container died ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:19:10 np0005590528 conmon[252242]: conmon ec401cbc4e3f48133ba7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8.scope/container/memory.events
Jan 21 09:19:10 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3d261991db5eab680448f4653c44821a4b6d29c821b40fa57ed658ed9606f2c8-merged.mount: Deactivated successfully.
Jan 21 09:19:10 np0005590528 podman[252226]: 2026-01-21 14:19:10.237913945 +0000 UTC m=+0.204964847 container remove ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:19:10 np0005590528 systemd[1]: libpod-conmon-ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8.scope: Deactivated successfully.
Jan 21 09:19:10 np0005590528 podman[252268]: 2026-01-21 14:19:10.435018646 +0000 UTC m=+0.045054885 container create 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:19:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 73 KiB/s wr, 7 op/s
Jan 21 09:19:10 np0005590528 systemd[1]: Started libpod-conmon-6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2.scope.
Jan 21 09:19:10 np0005590528 podman[252268]: 2026-01-21 14:19:10.414857085 +0000 UTC m=+0.024893354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:19:10 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:19:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:10 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:10 np0005590528 podman[252268]: 2026-01-21 14:19:10.545419419 +0000 UTC m=+0.155455718 container init 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 09:19:10 np0005590528 podman[252268]: 2026-01-21 14:19:10.562729635 +0000 UTC m=+0.172765904 container start 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:19:10 np0005590528 podman[252268]: 2026-01-21 14:19:10.566927783 +0000 UTC m=+0.176964122 container attach 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:19:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:19:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:19:10 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:19:11 np0005590528 sad_sanderson[252284]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:19:11 np0005590528 sad_sanderson[252284]: --> All data devices are unavailable
Jan 21 09:19:11 np0005590528 systemd[1]: libpod-6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2.scope: Deactivated successfully.
Jan 21 09:19:11 np0005590528 podman[252268]: 2026-01-21 14:19:11.108087244 +0000 UTC m=+0.718123483 container died 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 09:19:11 np0005590528 systemd[1]: var-lib-containers-storage-overlay-29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217-merged.mount: Deactivated successfully.
Jan 21 09:19:11 np0005590528 podman[252268]: 2026-01-21 14:19:11.152399521 +0000 UTC m=+0.762435780 container remove 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:19:11 np0005590528 systemd[1]: libpod-conmon-6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2.scope: Deactivated successfully.
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]} v 0)
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]} : dispatch
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]}]': finished
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:11 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]} : dispatch
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]}]': finished
Jan 21 09:19:11 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:11 np0005590528 podman[252379]: 2026-01-21 14:19:11.665116246 +0000 UTC m=+0.047856270 container create 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:19:11 np0005590528 systemd[1]: Started libpod-conmon-866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c.scope.
Jan 21 09:19:11 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:19:11 np0005590528 podman[252379]: 2026-01-21 14:19:11.648077908 +0000 UTC m=+0.030817952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:19:11 np0005590528 podman[252379]: 2026-01-21 14:19:11.743542771 +0000 UTC m=+0.126282825 container init 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:19:11 np0005590528 podman[252379]: 2026-01-21 14:19:11.754035356 +0000 UTC m=+0.136775390 container start 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 09:19:11 np0005590528 podman[252379]: 2026-01-21 14:19:11.758797858 +0000 UTC m=+0.141538092 container attach 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:19:11 np0005590528 great_curran[252396]: 167 167
Jan 21 09:19:11 np0005590528 systemd[1]: libpod-866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c.scope: Deactivated successfully.
Jan 21 09:19:11 np0005590528 podman[252379]: 2026-01-21 14:19:11.763230212 +0000 UTC m=+0.145970236 container died 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:19:11 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e35f387ad88eba1b4c7b2cde2f87c8ff8e1052d28b9d4a093f819539a74a6806-merged.mount: Deactivated successfully.
Jan 21 09:19:11 np0005590528 podman[252379]: 2026-01-21 14:19:11.808927161 +0000 UTC m=+0.191667185 container remove 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:19:11 np0005590528 systemd[1]: libpod-conmon-866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c.scope: Deactivated successfully.
Jan 21 09:19:12 np0005590528 podman[252420]: 2026-01-21 14:19:12.009065144 +0000 UTC m=+0.059866952 container create 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:19:12 np0005590528 systemd[1]: Started libpod-conmon-5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd.scope.
Jan 21 09:19:12 np0005590528 podman[252420]: 2026-01-21 14:19:11.981245833 +0000 UTC m=+0.032047641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:19:12 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:19:12 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:12 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:12 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:12 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:12 np0005590528 podman[252420]: 2026-01-21 14:19:12.107716441 +0000 UTC m=+0.158518299 container init 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 09:19:12 np0005590528 podman[252420]: 2026-01-21 14:19:12.116233861 +0000 UTC m=+0.167035659 container start 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:19:12 np0005590528 podman[252420]: 2026-01-21 14:19:12.120539141 +0000 UTC m=+0.171341029 container attach 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]: {
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:    "0": [
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:        {
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "devices": [
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "/dev/loop3"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            ],
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_name": "ceph_lv0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_size": "21470642176",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "name": "ceph_lv0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "tags": {
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cluster_name": "ceph",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.crush_device_class": "",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.encrypted": "0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.objectstore": "bluestore",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osd_id": "0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.type": "block",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.vdo": "0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.with_tpm": "0"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            },
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "type": "block",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "vg_name": "ceph_vg0"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:        }
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:    ],
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:    "1": [
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:        {
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "devices": [
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "/dev/loop4"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            ],
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_name": "ceph_lv1",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_size": "21470642176",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "name": "ceph_lv1",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "tags": {
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cluster_name": "ceph",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.crush_device_class": "",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.encrypted": "0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.objectstore": "bluestore",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osd_id": "1",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.type": "block",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.vdo": "0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.with_tpm": "0"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            },
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "type": "block",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "vg_name": "ceph_vg1"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:        }
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:    ],
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:    "2": [
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:        {
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "devices": [
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "/dev/loop5"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            ],
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_name": "ceph_lv2",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_size": "21470642176",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "name": "ceph_lv2",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "tags": {
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.cluster_name": "ceph",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.crush_device_class": "",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.encrypted": "0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.objectstore": "bluestore",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osd_id": "2",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.type": "block",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.vdo": "0",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:                "ceph.with_tpm": "0"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            },
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "type": "block",
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:            "vg_name": "ceph_vg2"
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:        }
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]:    ]
Jan 21 09:19:12 np0005590528 stupefied_keller[252437]: }
Jan 21 09:19:12 np0005590528 systemd[1]: libpod-5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd.scope: Deactivated successfully.
Jan 21 09:19:12 np0005590528 conmon[252437]: conmon 5da9708234c7a7c6c3f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd.scope/container/memory.events
Jan 21 09:19:12 np0005590528 podman[252420]: 2026-01-21 14:19:12.425330753 +0000 UTC m=+0.476132551 container died 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 09:19:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 21 09:19:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 21 09:19:12 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 21 09:19:12 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378-merged.mount: Deactivated successfully.
Jan 21 09:19:12 np0005590528 podman[252420]: 2026-01-21 14:19:12.474741028 +0000 UTC m=+0.525542866 container remove 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 09:19:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 09:19:12 np0005590528 systemd[1]: libpod-conmon-5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd.scope: Deactivated successfully.
Jan 21 09:19:13 np0005590528 podman[252520]: 2026-01-21 14:19:13.004520914 +0000 UTC m=+0.054299032 container create 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 09:19:13 np0005590528 systemd[1]: Started libpod-conmon-7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da.scope.
Jan 21 09:19:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:19:13 np0005590528 podman[252520]: 2026-01-21 14:19:12.979720573 +0000 UTC m=+0.029498671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:19:13 np0005590528 podman[252520]: 2026-01-21 14:19:13.098540643 +0000 UTC m=+0.148318811 container init 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 09:19:13 np0005590528 podman[252520]: 2026-01-21 14:19:13.107063453 +0000 UTC m=+0.156841551 container start 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 09:19:13 np0005590528 podman[252520]: 2026-01-21 14:19:13.11123118 +0000 UTC m=+0.161009378 container attach 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 09:19:13 np0005590528 eloquent_joliot[252536]: 167 167
Jan 21 09:19:13 np0005590528 systemd[1]: libpod-7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da.scope: Deactivated successfully.
Jan 21 09:19:13 np0005590528 conmon[252536]: conmon 7e9d8c6d849c486800be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da.scope/container/memory.events
Jan 21 09:19:13 np0005590528 podman[252520]: 2026-01-21 14:19:13.11594184 +0000 UTC m=+0.165719978 container died 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19_b761fcee-f754-4df5-8daa-e066bd13935b", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19_b761fcee-f754-4df5-8daa-e066bd13935b, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:13 np0005590528 systemd[1]: var-lib-containers-storage-overlay-2d5fd12b78a44c9364de32a6ca20a49c0a04138e0af7864cee08f73a58c862d6-merged.mount: Deactivated successfully.
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19_b761fcee-f754-4df5-8daa-e066bd13935b, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:13 np0005590528 podman[252520]: 2026-01-21 14:19:13.165292195 +0000 UTC m=+0.215070293 container remove 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:19:13 np0005590528 systemd[1]: libpod-conmon-7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da.scope: Deactivated successfully.
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:13 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:13 np0005590528 podman[252561]: 2026-01-21 14:19:13.360517583 +0000 UTC m=+0.053526193 container create ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:19:13 np0005590528 systemd[1]: Started libpod-conmon-ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b.scope.
Jan 21 09:19:13 np0005590528 podman[252561]: 2026-01-21 14:19:13.342274046 +0000 UTC m=+0.035282626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:19:13 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:19:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:13 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:19:13 np0005590528 podman[252561]: 2026-01-21 14:19:13.470179728 +0000 UTC m=+0.163188358 container init ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 09:19:13 np0005590528 podman[252561]: 2026-01-21 14:19:13.478573034 +0000 UTC m=+0.171581604 container start ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:19:13 np0005590528 podman[252561]: 2026-01-21 14:19:13.482060447 +0000 UTC m=+0.175069067 container attach ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 09:19:14 np0005590528 lvm[252656]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:19:14 np0005590528 lvm[252655]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:19:14 np0005590528 lvm[252656]: VG ceph_vg1 finished
Jan 21 09:19:14 np0005590528 lvm[252655]: VG ceph_vg0 finished
Jan 21 09:19:14 np0005590528 lvm[252658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:19:14 np0005590528 lvm[252658]: VG ceph_vg2 finished
Jan 21 09:19:14 np0005590528 cranky_chatterjee[252577]: {}
Jan 21 09:19:14 np0005590528 systemd[1]: libpod-ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b.scope: Deactivated successfully.
Jan 21 09:19:14 np0005590528 podman[252561]: 2026-01-21 14:19:14.343510711 +0000 UTC m=+1.036519291 container died ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:19:14 np0005590528 systemd[1]: libpod-ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b.scope: Consumed 1.465s CPU time.
Jan 21 09:19:14 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da-merged.mount: Deactivated successfully.
Jan 21 09:19:14 np0005590528 podman[252561]: 2026-01-21 14:19:14.393666025 +0000 UTC m=+1.086674615 container remove ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 21 09:19:14 np0005590528 systemd[1]: libpod-conmon-ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b.scope: Deactivated successfully.
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:19:14 np0005590528 podman[252672]: 2026-01-21 14:19:14.478478219 +0000 UTC m=+0.070476770 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 52 KiB/s wr, 5 op/s
Jan 21 09:19:14 np0005590528 podman[252674]: 2026-01-21 14:19:14.514635965 +0000 UTC m=+0.106633026 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]} v 0)
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]} : dispatch
Jan 21 09:19:14 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]}]': finished
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57
Jan 21 09:19:14 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57],prefix=session evict} (starting...)
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:19:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 09:19:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]} : dispatch
Jan 21 09:19:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]}]': finished
Jan 21 09:19:16 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 105 KiB/s wr, 8 op/s
Jan 21 09:19:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1", "format": "json"}]: dispatch
Jan 21 09:19:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 21 09:19:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 21 09:19:17 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 4 op/s
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:19:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 09:19:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Jan 21 09:19:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Jan 21 09:19:18 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 09:19:18 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 09:19:18 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:19:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 09:19:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Jan 21 09:19:18 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Jan 21 09:19:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 142 KiB/s wr, 11 op/s
Jan 21 09:19:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 21 09:19:21 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 21 09:19:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1_6730d26a-6aff-40af-a601-82d487d21c1b", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1_6730d26a-6aff-40af-a601-82d487d21c1b, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1_6730d26a-6aff-40af-a601-82d487d21c1b, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 142 KiB/s wr, 10 op/s
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "format": "json"}]: dispatch
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:19:22 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:19:22.837+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '424167b3-6c3d-4062-8da1-4d053af4cf7b' of type subvolume
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '424167b3-6c3d-4062-8da1-4d053af4cf7b' of type subvolume
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b'' moved to trashcan
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:19:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 09:19:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:19:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3330434979' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:19:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:19:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3330434979' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:19:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 89 KiB/s wr, 7 op/s
Jan 21 09:19:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 473 B/s rd, 129 KiB/s wr, 9 op/s
Jan 21 09:19:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 21 09:19:26 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 21 09:19:26 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 21 09:19:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac", "format": "json"}]: dispatch
Jan 21 09:19:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 51 KiB/s wr, 3 op/s
Jan 21 09:19:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 463 B/s rd, 72 KiB/s wr, 5 op/s
Jan 21 09:19:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 21 09:19:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 21 09:19:31 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 21 09:19:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 79 KiB/s wr, 5 op/s
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac_c69e25ee-cc9d-4429-8a2a-8711a855d3dd", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac_c69e25ee-cc9d-4429-8a2a-8711a855d3dd, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac_c69e25ee-cc9d-4429-8a2a-8711a855d3dd, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:19:33.910 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:19:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:19:33.911 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:19:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:19:33.911 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:19:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 28 KiB/s wr, 2 op/s
Jan 21 09:19:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 214 B/s rd, 47 KiB/s wr, 4 op/s
Jan 21 09:19:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 45 KiB/s wr, 3 op/s
Jan 21 09:19:39 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:19:39.456 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:19:39 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:19:39.458 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:19:39 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:19:39.459 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:19:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54_50188602-9386-4740-9326-44acaedb4caa", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:39 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54_50188602-9386-4740-9326-44acaedb4caa, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:19:39
Jan 21 09:19:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:19:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:19:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'volumes']
Jan 21 09:19:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:19:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 2 op/s
Jan 21 09:19:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54_50188602-9386-4740-9326-44acaedb4caa, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:40 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:19:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:19:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 21 09:19:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 188 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 09:19:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 21 09:19:42 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "format": "json"}]: dispatch
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4fe02932-2d04-427d-b4f6-1c341396704b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4fe02932-2d04-427d-b4f6-1c341396704b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:19:43 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:19:43.136+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4fe02932-2d04-427d-b4f6-1c341396704b' of type subvolume
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4fe02932-2d04-427d-b4f6-1c341396704b' of type subvolume
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "force": true, "format": "json"}]: dispatch
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b'' moved to trashcan
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:19:43 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 09:19:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 21 09:19:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 21 09:19:43 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 21 09:19:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 1 op/s
Jan 21 09:19:45 np0005590528 podman[252745]: 2026-01-21 14:19:45.363319963 +0000 UTC m=+0.082418399 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 21 09:19:45 np0005590528 podman[252746]: 2026-01-21 14:19:45.364570983 +0000 UTC m=+0.076963612 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 09:19:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 63 KiB/s wr, 5 op/s
Jan 21 09:19:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 21 09:19:47 np0005590528 nova_compute[239261]: 2026-01-21 14:19:47.750 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 21 09:19:48 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 21 09:19:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 79 KiB/s wr, 5 op/s
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.741 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.742 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.768 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.769 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.769 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.769 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:19:48 np0005590528 nova_compute[239261]: 2026-01-21 14:19:48.770 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:19:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:19:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867554194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:19:49 np0005590528 nova_compute[239261]: 2026-01-21 14:19:49.358 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:19:49 np0005590528 nova_compute[239261]: 2026-01-21 14:19:49.518 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:19:49 np0005590528 nova_compute[239261]: 2026-01-21 14:19:49.519 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:19:49 np0005590528 nova_compute[239261]: 2026-01-21 14:19:49.520 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:19:49 np0005590528 nova_compute[239261]: 2026-01-21 14:19:49.520 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:19:49 np0005590528 nova_compute[239261]: 2026-01-21 14:19:49.582 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:19:49 np0005590528 nova_compute[239261]: 2026-01-21 14:19:49.583 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:19:49 np0005590528 nova_compute[239261]: 2026-01-21 14:19:49.596 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:19:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:19:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32715997' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:19:50 np0005590528 nova_compute[239261]: 2026-01-21 14:19:50.230 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:19:50 np0005590528 nova_compute[239261]: 2026-01-21 14:19:50.235 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:19:50 np0005590528 nova_compute[239261]: 2026-01-21 14:19:50.250 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:19:50 np0005590528 nova_compute[239261]: 2026-01-21 14:19:50.252 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:19:50 np0005590528 nova_compute[239261]: 2026-01-21 14:19:50.252 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 62 KiB/s wr, 5 op/s
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662177847039849 of space, bias 1.0, pg target 0.19986533541119547 quantized to 32 (current 32)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004678936436856764 of space, bias 4.0, pg target 0.5614723724228117 quantized to 16 (current 16)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:19:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:19:52 np0005590528 nova_compute[239261]: 2026-01-21 14:19:52.234 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:52 np0005590528 nova_compute[239261]: 2026-01-21 14:19:52.235 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:52 np0005590528 nova_compute[239261]: 2026-01-21 14:19:52.235 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:52 np0005590528 nova_compute[239261]: 2026-01-21 14:19:52.235 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:19:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 21 09:19:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 469 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 09:19:52 np0005590528 nova_compute[239261]: 2026-01-21 14:19:52.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:52 np0005590528 nova_compute[239261]: 2026-01-21 14:19:52.790 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 21 09:19:52 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 21 09:19:53 np0005590528 nova_compute[239261]: 2026-01-21 14:19:53.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 639 B/s wr, 1 op/s
Jan 21 09:19:54 np0005590528 nova_compute[239261]: 2026-01-21 14:19:54.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:19:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 253 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 09:19:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:19:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 1 op/s
Jan 21 09:20:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 21 09:20:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 21 09:20:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 09:20:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Jan 21 09:20:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 09:20:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 09:20:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:20:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:20:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:20:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:20:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:20:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:20:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 09:20:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:20:15 np0005590528 podman[252966]: 2026-01-21 14:20:15.490318761 +0000 UTC m=+0.054926056 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:20:15 np0005590528 podman[252965]: 2026-01-21 14:20:15.537497045 +0000 UTC m=+0.104802643 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 21 09:20:15 np0005590528 podman[253023]: 2026-01-21 14:20:15.723799494 +0000 UTC m=+0.061614282 container create 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:20:15 np0005590528 systemd[1]: Started libpod-conmon-411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62.scope.
Jan 21 09:20:15 np0005590528 podman[253023]: 2026-01-21 14:20:15.694016417 +0000 UTC m=+0.031831235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:20:15 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:20:15 np0005590528 podman[253023]: 2026-01-21 14:20:15.826001895 +0000 UTC m=+0.163816723 container init 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:20:15 np0005590528 podman[253023]: 2026-01-21 14:20:15.834366911 +0000 UTC m=+0.172181689 container start 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:20:15 np0005590528 podman[253023]: 2026-01-21 14:20:15.838222391 +0000 UTC m=+0.176037179 container attach 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:20:15 np0005590528 gracious_ganguly[253040]: 167 167
Jan 21 09:20:15 np0005590528 systemd[1]: libpod-411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62.scope: Deactivated successfully.
Jan 21 09:20:15 np0005590528 podman[253023]: 2026-01-21 14:20:15.840848242 +0000 UTC m=+0.178663020 container died 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:20:15 np0005590528 systemd[1]: var-lib-containers-storage-overlay-04ecaa4801fa46375682519307b2084c8ab1485fa2d55afab8760318869808d6-merged.mount: Deactivated successfully.
Jan 21 09:20:15 np0005590528 podman[253023]: 2026-01-21 14:20:15.899355431 +0000 UTC m=+0.237170199 container remove 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 21 09:20:15 np0005590528 systemd[1]: libpod-conmon-411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62.scope: Deactivated successfully.
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:20:15 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:20:16 np0005590528 podman[253065]: 2026-01-21 14:20:16.110775048 +0000 UTC m=+0.048668350 container create 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:20:16 np0005590528 systemd[1]: Started libpod-conmon-350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2.scope.
Jan 21 09:20:16 np0005590528 podman[253065]: 2026-01-21 14:20:16.09293151 +0000 UTC m=+0.030824832 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:20:16 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:20:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:16 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:16 np0005590528 podman[253065]: 2026-01-21 14:20:16.207091451 +0000 UTC m=+0.144984783 container init 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:20:16 np0005590528 podman[253065]: 2026-01-21 14:20:16.216988403 +0000 UTC m=+0.154881705 container start 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:20:16 np0005590528 podman[253065]: 2026-01-21 14:20:16.220465374 +0000 UTC m=+0.158358676 container attach 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:20:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 09:20:16 np0005590528 jolly_swanson[253081]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:20:16 np0005590528 jolly_swanson[253081]: --> All data devices are unavailable
Jan 21 09:20:16 np0005590528 systemd[1]: libpod-350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2.scope: Deactivated successfully.
Jan 21 09:20:16 np0005590528 podman[253065]: 2026-01-21 14:20:16.70073931 +0000 UTC m=+0.638632602 container died 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 09:20:16 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db-merged.mount: Deactivated successfully.
Jan 21 09:20:16 np0005590528 podman[253065]: 2026-01-21 14:20:16.743398289 +0000 UTC m=+0.681291591 container remove 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:20:16 np0005590528 systemd[1]: libpod-conmon-350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2.scope: Deactivated successfully.
Jan 21 09:20:17 np0005590528 podman[253175]: 2026-01-21 14:20:17.186821644 +0000 UTC m=+0.026844750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:20:17 np0005590528 podman[253175]: 2026-01-21 14:20:17.538160823 +0000 UTC m=+0.378183839 container create b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:20:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:18 np0005590528 systemd[1]: Started libpod-conmon-b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550.scope.
Jan 21 09:20:18 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:20:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:20:18 np0005590528 podman[253175]: 2026-01-21 14:20:18.648020511 +0000 UTC m=+1.488043617 container init b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:20:18 np0005590528 podman[253175]: 2026-01-21 14:20:18.659970541 +0000 UTC m=+1.499993597 container start b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 09:20:18 np0005590528 blissful_chandrasekhar[253192]: 167 167
Jan 21 09:20:18 np0005590528 podman[253175]: 2026-01-21 14:20:18.666415551 +0000 UTC m=+1.506438607 container attach b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:20:18 np0005590528 systemd[1]: libpod-b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550.scope: Deactivated successfully.
Jan 21 09:20:18 np0005590528 podman[253197]: 2026-01-21 14:20:18.724335537 +0000 UTC m=+0.033789702 container died b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:20:18 np0005590528 systemd[1]: var-lib-containers-storage-overlay-40c5627c9912160aa68553b5565a684b9217f2136ea11be64b05258843771390-merged.mount: Deactivated successfully.
Jan 21 09:20:18 np0005590528 podman[253197]: 2026-01-21 14:20:18.903973029 +0000 UTC m=+0.213427154 container remove b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 09:20:18 np0005590528 systemd[1]: libpod-conmon-b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550.scope: Deactivated successfully.
Jan 21 09:20:19 np0005590528 podman[253219]: 2026-01-21 14:20:19.102470524 +0000 UTC m=+0.043752766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:20:19 np0005590528 podman[253219]: 2026-01-21 14:20:19.214017283 +0000 UTC m=+0.155299435 container create 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:20:19 np0005590528 systemd[1]: Started libpod-conmon-3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b.scope.
Jan 21 09:20:19 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:20:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:19 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:19 np0005590528 podman[253219]: 2026-01-21 14:20:19.375668976 +0000 UTC m=+0.316951138 container init 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:20:19 np0005590528 podman[253219]: 2026-01-21 14:20:19.384245466 +0000 UTC m=+0.325527608 container start 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 21 09:20:19 np0005590528 podman[253219]: 2026-01-21 14:20:19.436290793 +0000 UTC m=+0.377572965 container attach 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]: {
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:    "0": [
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:        {
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "devices": [
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "/dev/loop3"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            ],
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_name": "ceph_lv0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_size": "21470642176",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "name": "ceph_lv0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "tags": {
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cluster_name": "ceph",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.crush_device_class": "",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.encrypted": "0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.objectstore": "bluestore",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osd_id": "0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.type": "block",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.vdo": "0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.with_tpm": "0"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            },
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "type": "block",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "vg_name": "ceph_vg0"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:        }
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:    ],
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:    "1": [
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:        {
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "devices": [
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "/dev/loop4"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            ],
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_name": "ceph_lv1",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_size": "21470642176",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "name": "ceph_lv1",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "tags": {
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cluster_name": "ceph",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.crush_device_class": "",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.encrypted": "0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.objectstore": "bluestore",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osd_id": "1",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.type": "block",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.vdo": "0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.with_tpm": "0"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            },
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "type": "block",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "vg_name": "ceph_vg1"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:        }
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:    ],
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:    "2": [
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:        {
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "devices": [
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "/dev/loop5"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            ],
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_name": "ceph_lv2",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_size": "21470642176",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "name": "ceph_lv2",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "tags": {
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.cluster_name": "ceph",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.crush_device_class": "",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.encrypted": "0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.objectstore": "bluestore",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osd_id": "2",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.type": "block",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.vdo": "0",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:                "ceph.with_tpm": "0"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            },
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "type": "block",
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:            "vg_name": "ceph_vg2"
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:        }
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]:    ]
Jan 21 09:20:19 np0005590528 xenodochial_haibt[253236]: }
Jan 21 09:20:19 np0005590528 systemd[1]: libpod-3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b.scope: Deactivated successfully.
Jan 21 09:20:19 np0005590528 podman[253219]: 2026-01-21 14:20:19.714858811 +0000 UTC m=+0.656140973 container died 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 09:20:19 np0005590528 systemd[1]: var-lib-containers-storage-overlay-aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c-merged.mount: Deactivated successfully.
Jan 21 09:20:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:20:20 np0005590528 podman[253219]: 2026-01-21 14:20:20.570301616 +0000 UTC m=+1.511583788 container remove 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:20:20 np0005590528 systemd[1]: libpod-conmon-3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b.scope: Deactivated successfully.
Jan 21 09:20:21 np0005590528 podman[253319]: 2026-01-21 14:20:21.024780929 +0000 UTC m=+0.029931892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:20:21 np0005590528 podman[253319]: 2026-01-21 14:20:21.355837135 +0000 UTC m=+0.360988098 container create c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 09:20:21 np0005590528 systemd[1]: Started libpod-conmon-c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c.scope.
Jan 21 09:20:21 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:20:21 np0005590528 podman[253319]: 2026-01-21 14:20:21.547656532 +0000 UTC m=+0.552807495 container init c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 09:20:21 np0005590528 podman[253319]: 2026-01-21 14:20:21.553596721 +0000 UTC m=+0.558747704 container start c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 09:20:21 np0005590528 affectionate_kalam[253335]: 167 167
Jan 21 09:20:21 np0005590528 systemd[1]: libpod-c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c.scope: Deactivated successfully.
Jan 21 09:20:21 np0005590528 podman[253319]: 2026-01-21 14:20:21.559212982 +0000 UTC m=+0.564363925 container attach c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:20:21 np0005590528 podman[253319]: 2026-01-21 14:20:21.559881619 +0000 UTC m=+0.565032562 container died c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 09:20:21 np0005590528 systemd[1]: var-lib-containers-storage-overlay-208a0d5b1996ec7f575c705e7d28ee9cc519ffec635058a6e1c119c4f8531fb4-merged.mount: Deactivated successfully.
Jan 21 09:20:22 np0005590528 podman[253319]: 2026-01-21 14:20:22.03439006 +0000 UTC m=+1.039540993 container remove c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:20:22 np0005590528 systemd[1]: libpod-conmon-c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c.scope: Deactivated successfully.
Jan 21 09:20:22 np0005590528 podman[253359]: 2026-01-21 14:20:22.177310244 +0000 UTC m=+0.020206934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:20:22 np0005590528 podman[253359]: 2026-01-21 14:20:22.389372986 +0000 UTC m=+0.232269686 container create 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 09:20:22 np0005590528 systemd[1]: Started libpod-conmon-415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9.scope.
Jan 21 09:20:22 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:20:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:22 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:20:22 np0005590528 podman[253359]: 2026-01-21 14:20:22.487993433 +0000 UTC m=+0.330890123 container init 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 09:20:22 np0005590528 podman[253359]: 2026-01-21 14:20:22.497316351 +0000 UTC m=+0.340213021 container start 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 09:20:22 np0005590528 podman[253359]: 2026-01-21 14:20:22.501184341 +0000 UTC m=+0.344081041 container attach 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:20:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:20:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:20:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054083079' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:20:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:20:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054083079' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:20:23 np0005590528 lvm[253454]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:20:23 np0005590528 lvm[253454]: VG ceph_vg1 finished
Jan 21 09:20:23 np0005590528 lvm[253453]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:20:23 np0005590528 lvm[253453]: VG ceph_vg0 finished
Jan 21 09:20:23 np0005590528 lvm[253456]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:20:23 np0005590528 lvm[253456]: VG ceph_vg2 finished
Jan 21 09:20:23 np0005590528 suspicious_liskov[253375]: {}
Jan 21 09:20:23 np0005590528 systemd[1]: libpod-415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9.scope: Deactivated successfully.
Jan 21 09:20:23 np0005590528 podman[253359]: 2026-01-21 14:20:23.342630138 +0000 UTC m=+1.185526808 container died 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 09:20:23 np0005590528 systemd[1]: libpod-415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9.scope: Consumed 1.338s CPU time.
Jan 21 09:20:23 np0005590528 systemd[1]: var-lib-containers-storage-overlay-fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985-merged.mount: Deactivated successfully.
Jan 21 09:20:23 np0005590528 podman[253359]: 2026-01-21 14:20:23.39953049 +0000 UTC m=+1.242427200 container remove 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:20:23 np0005590528 systemd[1]: libpod-conmon-415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9.scope: Deactivated successfully.
Jan 21 09:20:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:20:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:20:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:20:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:20:24 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:20:24 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:20:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:20:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:20:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c/e838926d-ddbc-4be3-a09d-636b8eb3404d'.
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c/.meta.tmp'
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c/.meta.tmp' to config b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c/.meta'
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "format": "json"}]: dispatch
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 09:20:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 09:20:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:20:29 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:20:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s wr, 0 op/s
Jan 21 09:20:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s wr, 0 op/s
Jan 21 09:20:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0/a864419b-98a6-4b69-8083-22802883d427'.
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0/.meta.tmp'
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0/.meta.tmp' to config b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0/.meta'
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "format": "json"}]: dispatch
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 09:20:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 09:20:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:20:33 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:20:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:20:33.911 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:20:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:20:33.913 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:20:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:20:33.913 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:20:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Jan 21 09:20:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Jan 21 09:20:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d/1fa83db7-b63e-4998-96ec-6ffd93b788e9'.
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d/.meta.tmp'
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d/.meta.tmp' to config b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d/.meta'
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "format": "json"}]: dispatch
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 09:20:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 09:20:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:20:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:20:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Jan 21 09:20:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:20:39
Jan 21 09:20:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:20:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:20:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes', 'images', 'default.rgw.log']
Jan 21 09:20:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:20:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s wr, 3 op/s
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:20:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354/a31a2175-050a-460b-86dc-4703b9dd32ff'.
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354/.meta.tmp'
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354/.meta.tmp' to config b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354/.meta'
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "format": "json"}]: dispatch
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 09:20:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:20:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:20:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 2 op/s
Jan 21 09:20:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 2 op/s
Jan 21 09:20:46 np0005590528 podman[253496]: 2026-01-21 14:20:46.336421991 +0000 UTC m=+0.055981551 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 09:20:46 np0005590528 podman[253495]: 2026-01-21 14:20:46.36972989 +0000 UTC m=+0.092841973 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 3 op/s
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "format": "json"}]: dispatch
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c2b5535e-c671-412e-8a2f-000a21e98354, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c2b5535e-c671-412e-8a2f-000a21e98354, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:20:46 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:20:46.870+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2b5535e-c671-412e-8a2f-000a21e98354' of type subvolume
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2b5535e-c671-412e-8a2f-000a21e98354' of type subvolume
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "force": true, "format": "json"}]: dispatch
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354'' moved to trashcan
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:20:46 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 09:20:47 np0005590528 nova_compute[239261]: 2026-01-21 14:20:47.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s wr, 2 op/s
Jan 21 09:20:49 np0005590528 nova_compute[239261]: 2026-01-21 14:20:49.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:49 np0005590528 nova_compute[239261]: 2026-01-21 14:20:49.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:20:49 np0005590528 nova_compute[239261]: 2026-01-21 14:20:49.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:20:49 np0005590528 nova_compute[239261]: 2026-01-21 14:20:49.743 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "format": "json"}]: dispatch
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:20:50 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:20:50.279+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4f159ef8-5ae2-4a2e-8475-13a3dd691b3d' of type subvolume
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4f159ef8-5ae2-4a2e-8475-13a3dd691b3d' of type subvolume
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "force": true, "format": "json"}]: dispatch
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d'' moved to trashcan
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 72 KiB/s wr, 4 op/s
Jan 21 09:20:50 np0005590528 nova_compute[239261]: 2026-01-21 14:20:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:50 np0005590528 nova_compute[239261]: 2026-01-21 14:20:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:50 np0005590528 nova_compute[239261]: 2026-01-21 14:20:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662144933880528 of space, bias 1.0, pg target 0.19986434801641584 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004876885647310164 of space, bias 4.0, pg target 0.5852262776772197 quantized to 16 (current 16)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:20:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:20:50 np0005590528 nova_compute[239261]: 2026-01-21 14:20:50.924 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:20:50 np0005590528 nova_compute[239261]: 2026-01-21 14:20:50.925 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:20:50 np0005590528 nova_compute[239261]: 2026-01-21 14:20:50.925 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:20:50 np0005590528 nova_compute[239261]: 2026-01-21 14:20:50.925 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:20:50 np0005590528 nova_compute[239261]: 2026-01-21 14:20:50.925 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:20:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:20:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530373979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:20:51 np0005590528 nova_compute[239261]: 2026-01-21 14:20:51.464 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:20:51 np0005590528 nova_compute[239261]: 2026-01-21 14:20:51.621 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:20:51 np0005590528 nova_compute[239261]: 2026-01-21 14:20:51.622 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5016MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:20:51 np0005590528 nova_compute[239261]: 2026-01-21 14:20:51.622 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:20:51 np0005590528 nova_compute[239261]: 2026-01-21 14:20:51.623 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:20:51 np0005590528 nova_compute[239261]: 2026-01-21 14:20:51.691 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:20:51 np0005590528 nova_compute[239261]: 2026-01-21 14:20:51.691 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:20:51 np0005590528 nova_compute[239261]: 2026-01-21 14:20:51.708 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:20:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 09:20:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:20:52 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817285755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:20:52 np0005590528 nova_compute[239261]: 2026-01-21 14:20:52.587 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.879s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:20:52 np0005590528 nova_compute[239261]: 2026-01-21 14:20:52.596 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:20:52 np0005590528 nova_compute[239261]: 2026-01-21 14:20:52.615 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:20:52 np0005590528 nova_compute[239261]: 2026-01-21 14:20:52.616 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:20:52 np0005590528 nova_compute[239261]: 2026-01-21 14:20:52.617 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:20:52 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "format": "json"}]: dispatch
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:20:53 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:20:53.659+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05a1a08c-ad8e-48c0-9200-239aff8cbad0' of type subvolume
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05a1a08c-ad8e-48c0-9200-239aff8cbad0' of type subvolume
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "force": true, "format": "json"}]: dispatch
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0'' moved to trashcan
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:20:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 09:20:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 09:20:54 np0005590528 nova_compute[239261]: 2026-01-21 14:20:54.616 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:54 np0005590528 nova_compute[239261]: 2026-01-21 14:20:54.617 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:54 np0005590528 nova_compute[239261]: 2026-01-21 14:20:54.617 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:54 np0005590528 nova_compute[239261]: 2026-01-21 14:20:54.617 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:20:54 np0005590528 nova_compute[239261]: 2026-01-21 14:20:54.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:20:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 71 KiB/s wr, 5 op/s
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "format": "json"}]: dispatch
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:378a9e2b-830b-4331-9f8d-cddced43a09c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:378a9e2b-830b-4331-9f8d-cddced43a09c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:20:57 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:20:57.095+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '378a9e2b-830b-4331-9f8d-cddced43a09c' of type subvolume
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '378a9e2b-830b-4331-9f8d-cddced43a09c' of type subvolume
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "force": true, "format": "json"}]: dispatch
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c'' moved to trashcan
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:20:57 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 09:20:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:20:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 09:21:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 80 KiB/s wr, 6 op/s
Jan 21 09:21:01 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:21:01.933 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:21:01 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:21:01.935 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:21:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 09:21:02 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 09:21:04 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:21:04.940 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:21:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 09:21:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 26 KiB/s wr, 3 op/s
Jan 21 09:21:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 38 KiB/s wr, 3 op/s
Jan 21 09:21:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:21:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:21:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:21:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:21:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:21:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 09:21:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/8cb7ae86-bfd3-4a18-836f-c9c7d266cc44'.
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "format": "json"}]: dispatch
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:21:12 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:21:12 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:21:12 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:21:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 1 op/s
Jan 21 09:21:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "format": "json"}]: dispatch
Jan 21 09:21:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:21:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:21:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 09:21:17 np0005590528 podman[253584]: 2026-01-21 14:21:17.334225914 +0000 UTC m=+0.057292701 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 21 09:21:17 np0005590528 podman[253583]: 2026-01-21 14:21:17.36352382 +0000 UTC m=+0.086428033 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 21 09:21:17 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "target_sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, target_sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/7101b50f-5fdb-4753-a2d7-62f31ffe88aa'.
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp'
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp' to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta'
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 83049bda-88bb-4dcc-9f15-3d09e73d4771 for path b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75'
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, target_sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 1837d3d1-766d-46d2-bd38-bb850ab9ec75)
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 1837d3d1-766d-46d2-bd38-bb850ab9ec75) -- by 0 seconds
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp'
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp' to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta'
Jan 21 09:21:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 1 op/s
Jan 21 09:21:21 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.tnwklj(active, since 36m)
Jan 21 09:21:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.snap/717877ed-ee59-4b6f-a8b8-a5e824a0e143/8cb7ae86-bfd3-4a18-836f-c9c7d266cc44' to b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/7101b50f-5fdb-4753-a2d7-62f31ffe88aa'
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp'
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp' to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta'
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] untracking 83049bda-88bb-4dcc-9f15-3d09e73d4771
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp'
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp' to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta'
Jan 21 09:21:21 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 1837d3d1-766d-46d2-bd38-bb850ab9ec75)
Jan 21 09:21:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 21 09:21:22 np0005590528 ceph-mgr[75322]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 21 09:21:22 np0005590528 ceph-mgr[75322]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Jan 21 09:21:22 np0005590528 ceph-mgr[75322]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 21 09:21:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 21 09:21:22 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7fc5286a2bb0>
Jan 21 09:21:22 np0005590528 ceph-mgr[75322]: [progress INFO root] Writing back 19 completed events
Jan 21 09:21:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 09:21:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:21:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Jan 21 09:21:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:21:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344732762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:21:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:21:22 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344732762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:21:23 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:21:23 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.tnwklj(active, since 36m)
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:21:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:21:24 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:21:25 np0005590528 podman[253806]: 2026-01-21 14:21:24.971843637 +0000 UTC m=+0.026960753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:21:25 np0005590528 podman[253806]: 2026-01-21 14:21:25.719771106 +0000 UTC m=+0.774888252 container create ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 09:21:25 np0005590528 systemd[1]: Started libpod-conmon-ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5.scope.
Jan 21 09:21:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:21:25 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:21:25 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:21:25 np0005590528 podman[253806]: 2026-01-21 14:21:25.968845823 +0000 UTC m=+1.023962949 container init ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 09:21:25 np0005590528 podman[253806]: 2026-01-21 14:21:25.975616671 +0000 UTC m=+1.030733777 container start ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 09:21:25 np0005590528 elastic_kowalevski[253822]: 167 167
Jan 21 09:21:25 np0005590528 systemd[1]: libpod-ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5.scope: Deactivated successfully.
Jan 21 09:21:25 np0005590528 conmon[253822]: conmon ec769da8333c68ce5487 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5.scope/container/memory.events
Jan 21 09:21:25 np0005590528 podman[253806]: 2026-01-21 14:21:25.996231603 +0000 UTC m=+1.051348709 container attach ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 09:21:25 np0005590528 podman[253806]: 2026-01-21 14:21:25.997091094 +0000 UTC m=+1.052208200 container died ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 09:21:26 np0005590528 systemd[1]: var-lib-containers-storage-overlay-eabbefac1d69a04ed01c1a1ba0d99f23e60acfefca12c1c100ba6b6ffcb381ff-merged.mount: Deactivated successfully.
Jan 21 09:21:26 np0005590528 podman[253806]: 2026-01-21 14:21:26.26664268 +0000 UTC m=+1.321759786 container remove ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:21:26 np0005590528 systemd[1]: libpod-conmon-ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5.scope: Deactivated successfully.
Jan 21 09:21:26 np0005590528 podman[253847]: 2026-01-21 14:21:26.448835703 +0000 UTC m=+0.031363455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:21:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 6 op/s
Jan 21 09:21:26 np0005590528 podman[253847]: 2026-01-21 14:21:26.574078453 +0000 UTC m=+0.156606195 container create 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:21:26 np0005590528 systemd[1]: Started libpod-conmon-113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a.scope.
Jan 21 09:21:26 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:21:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:26 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:26 np0005590528 podman[253847]: 2026-01-21 14:21:26.739433992 +0000 UTC m=+0.321961794 container init 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:21:26 np0005590528 podman[253847]: 2026-01-21 14:21:26.747306776 +0000 UTC m=+0.329834508 container start 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 09:21:26 np0005590528 podman[253847]: 2026-01-21 14:21:26.750526051 +0000 UTC m=+0.333053803 container attach 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 09:21:27 np0005590528 jolly_torvalds[253863]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:21:27 np0005590528 jolly_torvalds[253863]: --> All data devices are unavailable
Jan 21 09:21:27 np0005590528 systemd[1]: libpod-113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a.scope: Deactivated successfully.
Jan 21 09:21:27 np0005590528 conmon[253863]: conmon 113798a1c35012c35eac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a.scope/container/memory.events
Jan 21 09:21:27 np0005590528 podman[253847]: 2026-01-21 14:21:27.230750127 +0000 UTC m=+0.813277889 container died 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:21:27 np0005590528 systemd[1]: var-lib-containers-storage-overlay-f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d-merged.mount: Deactivated successfully.
Jan 21 09:21:27 np0005590528 podman[253847]: 2026-01-21 14:21:27.2932478 +0000 UTC m=+0.875775532 container remove 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 21 09:21:27 np0005590528 systemd[1]: libpod-conmon-113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a.scope: Deactivated successfully.
Jan 21 09:21:27 np0005590528 podman[253959]: 2026-01-21 14:21:27.72844099 +0000 UTC m=+0.041382928 container create d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:21:27 np0005590528 systemd[1]: Started libpod-conmon-d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693.scope.
Jan 21 09:21:27 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:21:27 np0005590528 podman[253959]: 2026-01-21 14:21:27.710480461 +0000 UTC m=+0.023422419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:21:27 np0005590528 podman[253959]: 2026-01-21 14:21:27.812278322 +0000 UTC m=+0.125220270 container init d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 09:21:27 np0005590528 podman[253959]: 2026-01-21 14:21:27.820210118 +0000 UTC m=+0.133152056 container start d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 09:21:27 np0005590528 podman[253959]: 2026-01-21 14:21:27.824145561 +0000 UTC m=+0.137087519 container attach d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 09:21:27 np0005590528 clever_shtern[253975]: 167 167
Jan 21 09:21:27 np0005590528 systemd[1]: libpod-d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693.scope: Deactivated successfully.
Jan 21 09:21:27 np0005590528 podman[253959]: 2026-01-21 14:21:27.826339661 +0000 UTC m=+0.139281599 container died d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 09:21:27 np0005590528 systemd[1]: var-lib-containers-storage-overlay-261ef37cc57a32bf4c6d2a93580de36e39753cfbd63e0eb2bae553b038d9f4b7-merged.mount: Deactivated successfully.
Jan 21 09:21:27 np0005590528 podman[253959]: 2026-01-21 14:21:27.862822965 +0000 UTC m=+0.175764903 container remove d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:21:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:27 np0005590528 systemd[1]: libpod-conmon-d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693.scope: Deactivated successfully.
Jan 21 09:21:28 np0005590528 podman[253998]: 2026-01-21 14:21:28.021252483 +0000 UTC m=+0.038536323 container create 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:21:28 np0005590528 systemd[1]: Started libpod-conmon-20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8.scope.
Jan 21 09:21:28 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:21:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:28 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:28 np0005590528 podman[253998]: 2026-01-21 14:21:28.004726596 +0000 UTC m=+0.022010456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:21:28 np0005590528 podman[253998]: 2026-01-21 14:21:28.104409608 +0000 UTC m=+0.121693478 container init 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 09:21:28 np0005590528 podman[253998]: 2026-01-21 14:21:28.111230158 +0000 UTC m=+0.128513998 container start 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 09:21:28 np0005590528 podman[253998]: 2026-01-21 14:21:28.114915173 +0000 UTC m=+0.132199023 container attach 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]: {
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:    "0": [
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:        {
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "devices": [
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "/dev/loop3"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            ],
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_name": "ceph_lv0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_size": "21470642176",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "name": "ceph_lv0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "tags": {
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cluster_name": "ceph",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.crush_device_class": "",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.encrypted": "0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.objectstore": "bluestore",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osd_id": "0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.type": "block",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.vdo": "0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.with_tpm": "0"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            },
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "type": "block",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "vg_name": "ceph_vg0"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:        }
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:    ],
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:    "1": [
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:        {
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "devices": [
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "/dev/loop4"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            ],
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_name": "ceph_lv1",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_size": "21470642176",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "name": "ceph_lv1",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "tags": {
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cluster_name": "ceph",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.crush_device_class": "",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.encrypted": "0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.objectstore": "bluestore",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osd_id": "1",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.type": "block",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.vdo": "0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.with_tpm": "0"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            },
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "type": "block",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "vg_name": "ceph_vg1"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:        }
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:    ],
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:    "2": [
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:        {
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "devices": [
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "/dev/loop5"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            ],
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_name": "ceph_lv2",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_size": "21470642176",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "name": "ceph_lv2",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "tags": {
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.cluster_name": "ceph",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.crush_device_class": "",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.encrypted": "0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.objectstore": "bluestore",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osd_id": "2",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.type": "block",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.vdo": "0",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:                "ceph.with_tpm": "0"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            },
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "type": "block",
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:            "vg_name": "ceph_vg2"
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:        }
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]:    ]
Jan 21 09:21:28 np0005590528 strange_heisenberg[254015]: }
Jan 21 09:21:28 np0005590528 systemd[1]: libpod-20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8.scope: Deactivated successfully.
Jan 21 09:21:28 np0005590528 podman[253998]: 2026-01-21 14:21:28.402016911 +0000 UTC m=+0.419300761 container died 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:21:28 np0005590528 systemd[1]: var-lib-containers-storage-overlay-370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2-merged.mount: Deactivated successfully.
Jan 21 09:21:28 np0005590528 podman[253998]: 2026-01-21 14:21:28.441801203 +0000 UTC m=+0.459085043 container remove 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 09:21:28 np0005590528 systemd[1]: libpod-conmon-20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8.scope: Deactivated successfully.
Jan 21 09:21:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 09:21:28 np0005590528 podman[254098]: 2026-01-21 14:21:28.99005799 +0000 UTC m=+0.114854068 container create 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 21 09:21:28 np0005590528 podman[254098]: 2026-01-21 14:21:28.897137026 +0000 UTC m=+0.021933134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:21:29 np0005590528 systemd[1]: Started libpod-conmon-4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94.scope.
Jan 21 09:21:29 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:21:29 np0005590528 podman[254098]: 2026-01-21 14:21:29.091038713 +0000 UTC m=+0.215834791 container init 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:21:29 np0005590528 podman[254098]: 2026-01-21 14:21:29.099316856 +0000 UTC m=+0.224112934 container start 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:21:29 np0005590528 jolly_mahavira[254115]: 167 167
Jan 21 09:21:29 np0005590528 systemd[1]: libpod-4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94.scope: Deactivated successfully.
Jan 21 09:21:29 np0005590528 podman[254098]: 2026-01-21 14:21:29.105099992 +0000 UTC m=+0.229896090 container attach 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:21:29 np0005590528 podman[254098]: 2026-01-21 14:21:29.105472231 +0000 UTC m=+0.230268309 container died 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:21:29 np0005590528 systemd[1]: var-lib-containers-storage-overlay-13b249de87c300435c9248158d9a7af734e5f94184ea1ea19998b6265dcb7ce5-merged.mount: Deactivated successfully.
Jan 21 09:21:29 np0005590528 podman[254098]: 2026-01-21 14:21:29.145354364 +0000 UTC m=+0.270150482 container remove 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 09:21:29 np0005590528 systemd[1]: libpod-conmon-4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94.scope: Deactivated successfully.
Jan 21 09:21:29 np0005590528 podman[254140]: 2026-01-21 14:21:29.309066144 +0000 UTC m=+0.044050251 container create bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 09:21:29 np0005590528 systemd[1]: Started libpod-conmon-bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7.scope.
Jan 21 09:21:29 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:21:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:29 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:21:29 np0005590528 podman[254140]: 2026-01-21 14:21:29.288974024 +0000 UTC m=+0.023958211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:21:29 np0005590528 podman[254140]: 2026-01-21 14:21:29.40594977 +0000 UTC m=+0.140933907 container init bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 09:21:29 np0005590528 podman[254140]: 2026-01-21 14:21:29.411678806 +0000 UTC m=+0.146662913 container start bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:21:29 np0005590528 podman[254140]: 2026-01-21 14:21:29.414923201 +0000 UTC m=+0.149907328 container attach bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 09:21:30 np0005590528 lvm[254235]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:21:30 np0005590528 lvm[254234]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:21:30 np0005590528 lvm[254234]: VG ceph_vg0 finished
Jan 21 09:21:30 np0005590528 lvm[254235]: VG ceph_vg1 finished
Jan 21 09:21:30 np0005590528 lvm[254237]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:21:30 np0005590528 lvm[254237]: VG ceph_vg2 finished
Jan 21 09:21:30 np0005590528 lvm[254239]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:21:30 np0005590528 lvm[254239]: VG ceph_vg2 finished
Jan 21 09:21:30 np0005590528 hungry_joliot[254156]: {}
Jan 21 09:21:30 np0005590528 lvm[254241]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:21:30 np0005590528 lvm[254241]: VG ceph_vg2 finished
Jan 21 09:21:30 np0005590528 systemd[1]: libpod-bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7.scope: Deactivated successfully.
Jan 21 09:21:30 np0005590528 systemd[1]: libpod-bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7.scope: Consumed 1.334s CPU time.
Jan 21 09:21:30 np0005590528 podman[254140]: 2026-01-21 14:21:30.235274355 +0000 UTC m=+0.970258482 container died bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:21:30 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052-merged.mount: Deactivated successfully.
Jan 21 09:21:30 np0005590528 podman[254140]: 2026-01-21 14:21:30.275367032 +0000 UTC m=+1.010351139 container remove bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 09:21:30 np0005590528 systemd[1]: libpod-conmon-bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7.scope: Deactivated successfully.
Jan 21 09:21:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:21:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:21:30 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:21:30 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:21:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 50 KiB/s wr, 6 op/s
Jan 21 09:21:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:21:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:21:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 21 09:21:32 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:21:33.913 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:21:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:21:33.914 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:21:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:21:33.914 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:21:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 21 09:21:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 40 KiB/s wr, 4 op/s
Jan 21 09:21:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 09:21:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:21:39
Jan 21 09:21:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:21:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:21:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'vms', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.mgr']
Jan 21 09:21:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:21:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:21:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:21:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:21:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:21:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:21:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:48 np0005590528 podman[254280]: 2026-01-21 14:21:48.339331636 +0000 UTC m=+0.057957407 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 21 09:21:48 np0005590528 podman[254279]: 2026-01-21 14:21:48.390495643 +0000 UTC m=+0.107424115 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 21 09:21:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:21:49 np0005590528 nova_compute[239261]: 2026-01-21 14:21:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:49 np0005590528 nova_compute[239261]: 2026-01-21 14:21:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:21:49 np0005590528 nova_compute[239261]: 2026-01-21 14:21:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:21:49 np0005590528 nova_compute[239261]: 2026-01-21 14:21:49.743 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:21:49 np0005590528 nova_compute[239261]: 2026-01-21 14:21:49.743 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:21:50 np0005590528 nova_compute[239261]: 2026-01-21 14:21:50.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:50 np0005590528 nova_compute[239261]: 2026-01-21 14:21:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662144933880528 of space, bias 1.0, pg target 0.19986434801641584 quantized to 32 (current 32)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000509297280306306 of space, bias 4.0, pg target 0.6111567363675672 quantized to 16 (current 16)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:21:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:21:51 np0005590528 nova_compute[239261]: 2026-01-21 14:21:51.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:51 np0005590528 nova_compute[239261]: 2026-01-21 14:21:51.809 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:21:51 np0005590528 nova_compute[239261]: 2026-01-21 14:21:51.810 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:21:51 np0005590528 nova_compute[239261]: 2026-01-21 14:21:51.810 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:21:51 np0005590528 nova_compute[239261]: 2026-01-21 14:21:51.810 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:21:51 np0005590528 nova_compute[239261]: 2026-01-21 14:21:51.811 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:21:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:21:53 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337496685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:21:53 np0005590528 nova_compute[239261]: 2026-01-21 14:21:52.310 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:21:53 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:21:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:53 np0005590528 nova_compute[239261]: 2026-01-21 14:21:53.525 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:21:53 np0005590528 nova_compute[239261]: 2026-01-21 14:21:53.526 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5003MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:21:53 np0005590528 nova_compute[239261]: 2026-01-21 14:21:53.527 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:21:53 np0005590528 nova_compute[239261]: 2026-01-21 14:21:53.527 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:21:53 np0005590528 nova_compute[239261]: 2026-01-21 14:21:53.596 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:21:53 np0005590528 nova_compute[239261]: 2026-01-21 14:21:53.596 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:21:53 np0005590528 nova_compute[239261]: 2026-01-21 14:21:53.622 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:21:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:21:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159399189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:21:54 np0005590528 nova_compute[239261]: 2026-01-21 14:21:54.131 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:21:54 np0005590528 nova_compute[239261]: 2026-01-21 14:21:54.140 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:21:54 np0005590528 nova_compute[239261]: 2026-01-21 14:21:54.160 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:21:54 np0005590528 nova_compute[239261]: 2026-01-21 14:21:54.163 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:21:54 np0005590528 nova_compute[239261]: 2026-01-21 14:21:54.163 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:21:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:21:56 np0005590528 nova_compute[239261]: 2026-01-21 14:21:56.159 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:56 np0005590528 nova_compute[239261]: 2026-01-21 14:21:56.177 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:56 np0005590528 nova_compute[239261]: 2026-01-21 14:21:56.178 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:56 np0005590528 nova_compute[239261]: 2026-01-21 14:21:56.178 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:56 np0005590528 nova_compute[239261]: 2026-01-21 14:21:56.178 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:21:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:21:56 np0005590528 nova_compute[239261]: 2026-01-21 14:21:56.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:21:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:21:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:22:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:22:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:22:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:03 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 09:22:03 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.153152) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326153227, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2453, "num_deletes": 507, "total_data_size": 3556250, "memory_usage": 3624616, "flush_reason": "Manual Compaction"}
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326409268, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3482968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26144, "largest_seqno": 28596, "table_properties": {"data_size": 3472240, "index_size": 6390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 26695, "raw_average_key_size": 20, "raw_value_size": 3448236, "raw_average_value_size": 2654, "num_data_blocks": 282, "num_entries": 1299, "num_filter_entries": 1299, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769005127, "oldest_key_time": 1769005127, "file_creation_time": 1769005326, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 256196 microseconds, and 9518 cpu microseconds.
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.409352) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3482968 bytes OK
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.409386) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.412287) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.412314) EVENT_LOG_v1 {"time_micros": 1769005326412307, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.412343) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3544728, prev total WAL file size 3544728, number of live WAL files 2.
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.413693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3401KB)], [59(10MB)]
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326413776, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 14155264, "oldest_snapshot_seqno": -1}
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5819 keys, 9104765 bytes, temperature: kUnknown
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326531672, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9104765, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9065027, "index_size": 24076, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 145410, "raw_average_key_size": 24, "raw_value_size": 8959881, "raw_average_value_size": 1539, "num_data_blocks": 991, "num_entries": 5819, "num_filter_entries": 5819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769005326, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.532032) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9104765 bytes
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.533870) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.0 rd, 77.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.7) write-amplify(2.6) OK, records in: 6852, records dropped: 1033 output_compression: NoCompression
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.533891) EVENT_LOG_v1 {"time_micros": 1769005326533881, "job": 32, "event": "compaction_finished", "compaction_time_micros": 117997, "compaction_time_cpu_micros": 28274, "output_level": 6, "num_output_files": 1, "total_output_size": 9104765, "num_input_records": 6852, "num_output_records": 5819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326534604, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326536774, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.413518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:22:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 09:22:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:06 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 09:22:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 09:22:06 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:22:06 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:22:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576/2bcde8f3-ae45-419c-921c-04b7d67afb7e'.
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576/.meta.tmp'
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576/.meta.tmp' to config b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576/.meta'
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "format": "json"}]: dispatch
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 09:22:10 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:22:10 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:22:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 09:22:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:22:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc547f763d0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50ada2ca0>)]
Jan 21 09:22:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:22:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:22:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:22:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:22:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:22:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:22:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 09:22:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:13 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.tnwklj(active, since 37m)
Jan 21 09:22:14 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 09:22:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 09:22:14 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 09:22:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 09:22:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 22 KiB/s wr, 1 op/s
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "format": "json"}]: dispatch
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5be10ce2-edd5-48e2-9745-1baad2e01576, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5be10ce2-edd5-48e2-9745-1baad2e01576, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5be10ce2-edd5-48e2-9745-1baad2e01576' of type subvolume
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.853+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5be10ce2-edd5-48e2-9745-1baad2e01576' of type subvolume
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "force": true, "format": "json"}]: dispatch
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576'' moved to trashcan
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:17 np0005590528 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 09:22:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 22 KiB/s wr, 1 op/s
Jan 21 09:22:19 np0005590528 podman[254392]: 2026-01-21 14:22:19.333710115 +0000 UTC m=+0.057632503 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:22:19 np0005590528 podman[254391]: 2026-01-21 14:22:19.360899626 +0000 UTC m=+0.081780699 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 21 09:22:19 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.tnwklj(active, since 37m)
Jan 21 09:22:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 2 op/s
Jan 21 09:22:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Jan 21 09:22:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:22:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3436247533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:22:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:22:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3436247533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:22:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s wr, 2 op/s
Jan 21 09:22:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f/49e6749a-b814-4d79-b2c3-87dadc89263e'.
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f/.meta.tmp'
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f/.meta.tmp' to config b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f/.meta'
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "format": "json"}]: dispatch
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 09:22:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 09:22:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:22:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:22:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 25 KiB/s wr, 2 op/s
Jan 21 09:22:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 09:22:31 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Jan 21 09:22:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:22:31 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:22:31 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:22:32 np0005590528 podman[254578]: 2026-01-21 14:22:32.05379886 +0000 UTC m=+0.046420500 container create 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:22:32 np0005590528 systemd[1]: Started libpod-conmon-9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6.scope.
Jan 21 09:22:32 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:22:32 np0005590528 podman[254578]: 2026-01-21 14:22:32.030961755 +0000 UTC m=+0.023583415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:22:32 np0005590528 podman[254578]: 2026-01-21 14:22:32.157309787 +0000 UTC m=+0.149931487 container init 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:22:32 np0005590528 podman[254578]: 2026-01-21 14:22:32.171975714 +0000 UTC m=+0.164597354 container start 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:22:32 np0005590528 inspiring_snyder[254594]: 167 167
Jan 21 09:22:32 np0005590528 systemd[1]: libpod-9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6.scope: Deactivated successfully.
Jan 21 09:22:32 np0005590528 conmon[254594]: conmon 9e8bb229b0ff2296a83a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6.scope/container/memory.events
Jan 21 09:22:32 np0005590528 podman[254578]: 2026-01-21 14:22:32.196294365 +0000 UTC m=+0.188916025 container attach 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 09:22:32 np0005590528 podman[254578]: 2026-01-21 14:22:32.197772051 +0000 UTC m=+0.190393701 container died 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:22:32 np0005590528 systemd[1]: var-lib-containers-storage-overlay-550644e1a7b25e2e2a34467954cc5c59aa4d1d8d9d973cc8f2830a6559703d32-merged.mount: Deactivated successfully.
Jan 21 09:22:32 np0005590528 podman[254578]: 2026-01-21 14:22:32.240811707 +0000 UTC m=+0.233433347 container remove 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:22:32 np0005590528 systemd[1]: libpod-conmon-9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6.scope: Deactivated successfully.
Jan 21 09:22:32 np0005590528 podman[254618]: 2026-01-21 14:22:32.447168474 +0000 UTC m=+0.069146811 container create 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 09:22:32 np0005590528 systemd[1]: Started libpod-conmon-2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08.scope.
Jan 21 09:22:32 np0005590528 podman[254618]: 2026-01-21 14:22:32.404000616 +0000 UTC m=+0.025978963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:22:32 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:22:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:32 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:32 np0005590528 podman[254618]: 2026-01-21 14:22:32.541358555 +0000 UTC m=+0.163336902 container init 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 09:22:32 np0005590528 podman[254618]: 2026-01-21 14:22:32.549340869 +0000 UTC m=+0.171319196 container start 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 09:22:32 np0005590528 podman[254618]: 2026-01-21 14:22:32.553580832 +0000 UTC m=+0.175559179 container attach 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:22:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 09:22:33 np0005590528 silly_diffie[254635]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:22:33 np0005590528 silly_diffie[254635]: --> All data devices are unavailable
Jan 21 09:22:33 np0005590528 systemd[1]: libpod-2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08.scope: Deactivated successfully.
Jan 21 09:22:33 np0005590528 podman[254618]: 2026-01-21 14:22:33.033254465 +0000 UTC m=+0.655232792 container died 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:22:33 np0005590528 systemd[1]: var-lib-containers-storage-overlay-efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868-merged.mount: Deactivated successfully.
Jan 21 09:22:33 np0005590528 podman[254618]: 2026-01-21 14:22:33.095736645 +0000 UTC m=+0.717715012 container remove 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:22:33 np0005590528 systemd[1]: libpod-conmon-2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08.scope: Deactivated successfully.
Jan 21 09:22:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:33 np0005590528 podman[254730]: 2026-01-21 14:22:33.5647796 +0000 UTC m=+0.040488406 container create 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:22:33 np0005590528 systemd[1]: Started libpod-conmon-90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076.scope.
Jan 21 09:22:33 np0005590528 podman[254730]: 2026-01-21 14:22:33.545248055 +0000 UTC m=+0.020956881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:22:33 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:22:33 np0005590528 podman[254730]: 2026-01-21 14:22:33.678254899 +0000 UTC m=+0.153963725 container init 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 09:22:33 np0005590528 podman[254730]: 2026-01-21 14:22:33.689173723 +0000 UTC m=+0.164882569 container start 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:22:33 np0005590528 focused_poitras[254746]: 167 167
Jan 21 09:22:33 np0005590528 systemd[1]: libpod-90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076.scope: Deactivated successfully.
Jan 21 09:22:33 np0005590528 podman[254730]: 2026-01-21 14:22:33.704905227 +0000 UTC m=+0.180614073 container attach 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 09:22:33 np0005590528 podman[254730]: 2026-01-21 14:22:33.705615743 +0000 UTC m=+0.181324559 container died 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:22:33 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b520f474f61d91cb6bc7d038e744ae05cf03bd03a78cc9db83e62d2e214880a5-merged.mount: Deactivated successfully.
Jan 21 09:22:33 np0005590528 podman[254730]: 2026-01-21 14:22:33.766353471 +0000 UTC m=+0.242062287 container remove 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:22:33 np0005590528 systemd[1]: libpod-conmon-90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076.scope: Deactivated successfully.
Jan 21 09:22:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:22:33.914 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:22:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:22:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:22:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:22:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:22:33 np0005590528 podman[254768]: 2026-01-21 14:22:33.973035466 +0000 UTC m=+0.048040049 container create 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 09:22:34 np0005590528 systemd[1]: Started libpod-conmon-06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83.scope.
Jan 21 09:22:34 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:22:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:34 np0005590528 podman[254768]: 2026-01-21 14:22:33.950686542 +0000 UTC m=+0.025691115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:22:34 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:34 np0005590528 podman[254768]: 2026-01-21 14:22:34.072925485 +0000 UTC m=+0.147930058 container init 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 09:22:34 np0005590528 podman[254768]: 2026-01-21 14:22:34.080160961 +0000 UTC m=+0.155165514 container start 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:22:34 np0005590528 podman[254768]: 2026-01-21 14:22:34.085116221 +0000 UTC m=+0.160120774 container attach 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]: {
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:    "0": [
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:        {
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "devices": [
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "/dev/loop3"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            ],
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_name": "ceph_lv0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_size": "21470642176",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "name": "ceph_lv0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "tags": {
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cluster_name": "ceph",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.crush_device_class": "",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.encrypted": "0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.objectstore": "bluestore",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osd_id": "0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.type": "block",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.vdo": "0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.with_tpm": "0"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            },
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "type": "block",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "vg_name": "ceph_vg0"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:        }
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:    ],
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:    "1": [
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:        {
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "devices": [
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "/dev/loop4"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            ],
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_name": "ceph_lv1",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_size": "21470642176",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "name": "ceph_lv1",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "tags": {
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cluster_name": "ceph",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.crush_device_class": "",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.encrypted": "0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.objectstore": "bluestore",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osd_id": "1",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.type": "block",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.vdo": "0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.with_tpm": "0"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            },
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "type": "block",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "vg_name": "ceph_vg1"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:        }
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:    ],
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:    "2": [
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:        {
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "devices": [
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "/dev/loop5"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            ],
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_name": "ceph_lv2",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_size": "21470642176",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "name": "ceph_lv2",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "tags": {
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.cluster_name": "ceph",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.crush_device_class": "",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.encrypted": "0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.objectstore": "bluestore",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osd_id": "2",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.type": "block",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.vdo": "0",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:                "ceph.with_tpm": "0"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            },
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "type": "block",
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:            "vg_name": "ceph_vg2"
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:        }
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]:    ]
Jan 21 09:22:34 np0005590528 reverent_volhard[254784]: }
Jan 21 09:22:34 np0005590528 systemd[1]: libpod-06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83.scope: Deactivated successfully.
Jan 21 09:22:34 np0005590528 podman[254768]: 2026-01-21 14:22:34.381670922 +0000 UTC m=+0.456675475 container died 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 09:22:34 np0005590528 systemd[1]: var-lib-containers-storage-overlay-f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e-merged.mount: Deactivated successfully.
Jan 21 09:22:34 np0005590528 podman[254768]: 2026-01-21 14:22:34.472286055 +0000 UTC m=+0.547290608 container remove 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:22:34 np0005590528 systemd[1]: libpod-conmon-06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83.scope: Deactivated successfully.
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "format": "json"}]: dispatch
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5b257f1d-b61f-419d-bc85-c380d554748f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5b257f1d-b61f-419d-bc85-c380d554748f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b257f1d-b61f-419d-bc85-c380d554748f' of type subvolume
Jan 21 09:22:34 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:34.572+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b257f1d-b61f-419d-bc85-c380d554748f' of type subvolume
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "force": true, "format": "json"}]: dispatch
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f'' moved to trashcan
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:22:34 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 09:22:34 np0005590528 podman[254869]: 2026-01-21 14:22:34.970081179 +0000 UTC m=+0.035050064 container create f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:22:35 np0005590528 systemd[1]: Started libpod-conmon-f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d.scope.
Jan 21 09:22:35 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:22:35 np0005590528 podman[254869]: 2026-01-21 14:22:34.955381272 +0000 UTC m=+0.020350177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:22:35 np0005590528 podman[254869]: 2026-01-21 14:22:35.059033432 +0000 UTC m=+0.124002337 container init f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 09:22:35 np0005590528 podman[254869]: 2026-01-21 14:22:35.066547654 +0000 UTC m=+0.131516549 container start f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:22:35 np0005590528 podman[254869]: 2026-01-21 14:22:35.069876186 +0000 UTC m=+0.134845091 container attach f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 09:22:35 np0005590528 epic_hopper[254886]: 167 167
Jan 21 09:22:35 np0005590528 systemd[1]: libpod-f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d.scope: Deactivated successfully.
Jan 21 09:22:35 np0005590528 podman[254869]: 2026-01-21 14:22:35.072064438 +0000 UTC m=+0.137033323 container died f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 09:22:35 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d87bcfc5069ba9af18865b7c51e87fb4818088fc1ba55839f799594c76385c00-merged.mount: Deactivated successfully.
Jan 21 09:22:35 np0005590528 podman[254869]: 2026-01-21 14:22:35.113735202 +0000 UTC m=+0.178704087 container remove f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 09:22:35 np0005590528 systemd[1]: libpod-conmon-f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d.scope: Deactivated successfully.
Jan 21 09:22:35 np0005590528 podman[254910]: 2026-01-21 14:22:35.292228081 +0000 UTC m=+0.049005522 container create e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 09:22:35 np0005590528 systemd[1]: Started libpod-conmon-e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e.scope.
Jan 21 09:22:35 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:22:35 np0005590528 podman[254910]: 2026-01-21 14:22:35.270085334 +0000 UTC m=+0.026862865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:22:35 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:35 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:35 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:35 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:22:35 np0005590528 podman[254910]: 2026-01-21 14:22:35.379402492 +0000 UTC m=+0.136179963 container init e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:22:35 np0005590528 podman[254910]: 2026-01-21 14:22:35.385114431 +0000 UTC m=+0.141891872 container start e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 09:22:35 np0005590528 podman[254910]: 2026-01-21 14:22:35.388543053 +0000 UTC m=+0.145320494 container attach e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:22:36 np0005590528 lvm[255003]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:22:36 np0005590528 lvm[255005]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:22:36 np0005590528 lvm[255003]: VG ceph_vg0 finished
Jan 21 09:22:36 np0005590528 lvm[255005]: VG ceph_vg1 finished
Jan 21 09:22:36 np0005590528 lvm[255007]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:22:36 np0005590528 lvm[255007]: VG ceph_vg2 finished
Jan 21 09:22:36 np0005590528 lvm[255008]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:22:36 np0005590528 lvm[255008]: VG ceph_vg1 finished
Jan 21 09:22:36 np0005590528 condescending_feynman[254926]: {}
Jan 21 09:22:36 np0005590528 systemd[1]: libpod-e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e.scope: Deactivated successfully.
Jan 21 09:22:36 np0005590528 systemd[1]: libpod-e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e.scope: Consumed 1.466s CPU time.
Jan 21 09:22:36 np0005590528 podman[254910]: 2026-01-21 14:22:36.279079947 +0000 UTC m=+1.035857408 container died e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:22:36 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036-merged.mount: Deactivated successfully.
Jan 21 09:22:36 np0005590528 podman[254910]: 2026-01-21 14:22:36.328743065 +0000 UTC m=+1.085520506 container remove e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:22:36 np0005590528 systemd[1]: libpod-conmon-e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e.scope: Deactivated successfully.
Jan 21 09:22:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:22:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:22:36 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:22:36 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:22:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 09:22:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:22:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:22:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 1 op/s
Jan 21 09:22:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:22:39
Jan 21 09:22:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:22:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:22:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.log']
Jan 21 09:22:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:22:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc5063dedf0>)]
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50adb7b50>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50a4a3850>)]
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:22:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:22:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:22:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 2 op/s
Jan 21 09:22:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:43 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.tnwklj(active, since 38m)
Jan 21 09:22:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6/20533bf2-9642-4ef1-8d58-d49be22aa6ba'.
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6/.meta.tmp'
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6/.meta.tmp' to config b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6/.meta'
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "format": "json"}]: dispatch
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 09:22:45 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 09:22:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:22:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:22:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Jan 21 09:22:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "format": "json"}]: dispatch
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c251ad3b-9d06-4c01-a543-f9720f7a74a6' of type subvolume
Jan 21 09:22:48 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:48.842+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c251ad3b-9d06-4c01-a543-f9720f7a74a6' of type subvolume
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "force": true, "format": "json"}]: dispatch
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6'' moved to trashcan
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:22:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 09:22:49 np0005590528 nova_compute[239261]: 2026-01-21 14:22:49.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:49 np0005590528 nova_compute[239261]: 2026-01-21 14:22:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:22:49 np0005590528 nova_compute[239261]: 2026-01-21 14:22:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:22:49 np0005590528 nova_compute[239261]: 2026-01-21 14:22:49.907 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:22:50 np0005590528 podman[255050]: 2026-01-21 14:22:50.3324485 +0000 UTC m=+0.050512559 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:22:50 np0005590528 podman[255049]: 2026-01-21 14:22:50.362941492 +0000 UTC m=+0.084750931 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 3 op/s
Jan 21 09:22:50 np0005590528 nova_compute[239261]: 2026-01-21 14:22:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662144933880528 of space, bias 1.0, pg target 0.19986434801641584 quantized to 32 (current 32)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005335333974999345 of space, bias 4.0, pg target 0.6402400769999215 quantized to 16 (current 16)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:22:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:22:51 np0005590528 nova_compute[239261]: 2026-01-21 14:22:51.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:51 np0005590528 nova_compute[239261]: 2026-01-21 14:22:51.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "force": true, "format": "json"}]: dispatch
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75'' moved to trashcan
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 09:22:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 1 op/s
Jan 21 09:22:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:53 np0005590528 nova_compute[239261]: 2026-01-21 14:22:53.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:53 np0005590528 nova_compute[239261]: 2026-01-21 14:22:53.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:53 np0005590528 nova_compute[239261]: 2026-01-21 14:22:53.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:22:53 np0005590528 nova_compute[239261]: 2026-01-21 14:22:53.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:22:53 np0005590528 nova_compute[239261]: 2026-01-21 14:22:53.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:22:53 np0005590528 nova_compute[239261]: 2026-01-21 14:22:53.754 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:22:53 np0005590528 nova_compute[239261]: 2026-01-21 14:22:53.755 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:22:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:22:54 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243427951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:22:54 np0005590528 nova_compute[239261]: 2026-01-21 14:22:54.335 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:22:54 np0005590528 nova_compute[239261]: 2026-01-21 14:22:54.504 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:22:54 np0005590528 nova_compute[239261]: 2026-01-21 14:22:54.505 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:22:54 np0005590528 nova_compute[239261]: 2026-01-21 14:22:54.505 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:22:54 np0005590528 nova_compute[239261]: 2026-01-21 14:22:54.505 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:22:54 np0005590528 nova_compute[239261]: 2026-01-21 14:22:54.586 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:22:54 np0005590528 nova_compute[239261]: 2026-01-21 14:22:54.587 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:22:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Jan 21 09:22:54 np0005590528 nova_compute[239261]: 2026-01-21 14:22:54.606 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:22:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:22:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446406851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:22:55 np0005590528 nova_compute[239261]: 2026-01-21 14:22:55.210 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:22:55 np0005590528 nova_compute[239261]: 2026-01-21 14:22:55.215 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:22:55 np0005590528 nova_compute[239261]: 2026-01-21 14:22:55.351 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:22:55 np0005590528 nova_compute[239261]: 2026-01-21 14:22:55.354 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:22:55 np0005590528 nova_compute[239261]: 2026-01-21 14:22:55.355 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143_93465ce5-7efa-45bd-b994-86ad6664e631", "force": true, "format": "json"}]: dispatch
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143_93465ce5-7efa-45bd-b994-86ad6664e631, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143_93465ce5-7efa-45bd-b994-86ad6664e631, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "force": true, "format": "json"}]: dispatch
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 09:22:55 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:22:56 np0005590528 nova_compute[239261]: 2026-01-21 14:22:56.355 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:56 np0005590528 nova_compute[239261]: 2026-01-21 14:22:56.356 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:56 np0005590528 nova_compute[239261]: 2026-01-21 14:22:56.356 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:22:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 09:22:56 np0005590528 nova_compute[239261]: 2026-01-21 14:22:56.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:22:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:22:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "format": "json"}]: dispatch
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:22:59 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:59.467+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cf9fedcb-41b1-4a3d-849f-ba456ffc232e' of type subvolume
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cf9fedcb-41b1-4a3d-849f-ba456ffc232e' of type subvolume
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "force": true, "format": "json"}]: dispatch
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e'' moved to trashcan
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:22:59 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 09:23:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 5 op/s
Jan 21 09:23:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 21 09:23:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 21 09:23:01 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 21 09:23:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 5 op/s
Jan 21 09:23:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 09:23:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 09:23:07 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:23:07.267 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:23:07 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:23:07.269 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:23:07 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:23:07.270 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:23:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 21 09:23:08 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 21 09:23:08 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 21 09:23:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 09:23:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 235 B/s rd, 47 KiB/s wr, 3 op/s
Jan 21 09:23:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:23:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:23:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:23:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:23:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:23:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:23:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 09:23:13 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 12 KiB/s wr, 1 op/s
Jan 21 09:23:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 09:23:18 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 09:23:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Jan 21 09:23:21 np0005590528 podman[255142]: 2026-01-21 14:23:21.353159481 +0000 UTC m=+0.069131442 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:23:21 np0005590528 podman[255141]: 2026-01-21 14:23:21.395540011 +0000 UTC m=+0.114757241 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 21 09:23:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:23:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4270217180' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:23:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:23:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4270217180' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:23:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:28 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:33 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:23:33.914 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:23:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:23:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:23:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:23:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:23:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:23:37 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:23:37 np0005590528 podman[255327]: 2026-01-21 14:23:37.786145314 +0000 UTC m=+0.047643379 container create 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:23:37 np0005590528 systemd[1]: Started libpod-conmon-2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82.scope.
Jan 21 09:23:37 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:23:37 np0005590528 podman[255327]: 2026-01-21 14:23:37.765764699 +0000 UTC m=+0.027262754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:23:37 np0005590528 podman[255327]: 2026-01-21 14:23:37.873673563 +0000 UTC m=+0.135171608 container init 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 09:23:37 np0005590528 podman[255327]: 2026-01-21 14:23:37.882108488 +0000 UTC m=+0.143606513 container start 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:23:37 np0005590528 podman[255327]: 2026-01-21 14:23:37.885843469 +0000 UTC m=+0.147341494 container attach 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:23:37 np0005590528 systemd[1]: libpod-2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82.scope: Deactivated successfully.
Jan 21 09:23:37 np0005590528 upbeat_montalcini[255343]: 167 167
Jan 21 09:23:37 np0005590528 conmon[255343]: conmon 2495f644454fdfafa71a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82.scope/container/memory.events
Jan 21 09:23:37 np0005590528 podman[255327]: 2026-01-21 14:23:37.889801145 +0000 UTC m=+0.151299170 container died 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 21 09:23:37 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d8c8a5a4791ac6e879d66693205d476f9c673e783577728d292039ede9bb8815-merged.mount: Deactivated successfully.
Jan 21 09:23:37 np0005590528 podman[255327]: 2026-01-21 14:23:37.936514821 +0000 UTC m=+0.198012846 container remove 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:23:37 np0005590528 systemd[1]: libpod-conmon-2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82.scope: Deactivated successfully.
Jan 21 09:23:38 np0005590528 podman[255366]: 2026-01-21 14:23:38.099688088 +0000 UTC m=+0.041712804 container create 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:23:38 np0005590528 systemd[1]: Started libpod-conmon-0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a.scope.
Jan 21 09:23:38 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:23:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:38 np0005590528 podman[255366]: 2026-01-21 14:23:38.080378439 +0000 UTC m=+0.022403195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:23:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:38 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:38 np0005590528 podman[255366]: 2026-01-21 14:23:38.192038463 +0000 UTC m=+0.134063239 container init 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:23:38 np0005590528 podman[255366]: 2026-01-21 14:23:38.197922757 +0000 UTC m=+0.139947483 container start 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 09:23:38 np0005590528 podman[255366]: 2026-01-21 14:23:38.20465435 +0000 UTC m=+0.146679096 container attach 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/9d3ee2ce-401b-4b9c-9303-f18eb5e8eade'.
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp'
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp' to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta'
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "format": "json"}]: dispatch
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:23:38 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:23:38 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:23:38 np0005590528 thirsty_bassi[255382]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:23:38 np0005590528 thirsty_bassi[255382]: --> All data devices are unavailable
Jan 21 09:23:38 np0005590528 systemd[1]: libpod-0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a.scope: Deactivated successfully.
Jan 21 09:23:38 np0005590528 podman[255366]: 2026-01-21 14:23:38.689854908 +0000 UTC m=+0.631879634 container died 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:23:38 np0005590528 systemd[1]: var-lib-containers-storage-overlay-0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12-merged.mount: Deactivated successfully.
Jan 21 09:23:38 np0005590528 podman[255366]: 2026-01-21 14:23:38.738497871 +0000 UTC m=+0.680522597 container remove 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:23:38 np0005590528 systemd[1]: libpod-conmon-0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a.scope: Deactivated successfully.
Jan 21 09:23:39 np0005590528 podman[255475]: 2026-01-21 14:23:39.296438847 +0000 UTC m=+0.056685689 container create 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:23:39 np0005590528 systemd[1]: Started libpod-conmon-5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5.scope.
Jan 21 09:23:39 np0005590528 podman[255475]: 2026-01-21 14:23:39.267680758 +0000 UTC m=+0.027927660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:23:39 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:23:39 np0005590528 podman[255475]: 2026-01-21 14:23:39.394095402 +0000 UTC m=+0.154342244 container init 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:23:39 np0005590528 podman[255475]: 2026-01-21 14:23:39.402789183 +0000 UTC m=+0.163035995 container start 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:23:39 np0005590528 podman[255475]: 2026-01-21 14:23:39.406529994 +0000 UTC m=+0.166776836 container attach 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 09:23:39 np0005590528 systemd[1]: libpod-5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5.scope: Deactivated successfully.
Jan 21 09:23:39 np0005590528 flamboyant_lumiere[255491]: 167 167
Jan 21 09:23:39 np0005590528 conmon[255491]: conmon 5917245ff794758140f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5.scope/container/memory.events
Jan 21 09:23:39 np0005590528 podman[255475]: 2026-01-21 14:23:39.408979163 +0000 UTC m=+0.169225995 container died 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 09:23:39 np0005590528 systemd[1]: var-lib-containers-storage-overlay-10d51dc34ca73f0b5617bef2a5b01cf17fa3c8a186db35a052e340917e406537-merged.mount: Deactivated successfully.
Jan 21 09:23:39 np0005590528 podman[255475]: 2026-01-21 14:23:39.449597231 +0000 UTC m=+0.209844043 container remove 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 09:23:39 np0005590528 systemd[1]: libpod-conmon-5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5.scope: Deactivated successfully.
Jan 21 09:23:39 np0005590528 podman[255514]: 2026-01-21 14:23:39.634960868 +0000 UTC m=+0.052904857 container create 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:23:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:23:39
Jan 21 09:23:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:23:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:23:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'backups']
Jan 21 09:23:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:23:39 np0005590528 systemd[1]: Started libpod-conmon-5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c.scope.
Jan 21 09:23:39 np0005590528 podman[255514]: 2026-01-21 14:23:39.614250974 +0000 UTC m=+0.032195003 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:23:39 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:23:39 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:39 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:39 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:39 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:39 np0005590528 podman[255514]: 2026-01-21 14:23:39.735444432 +0000 UTC m=+0.153388441 container init 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 09:23:39 np0005590528 podman[255514]: 2026-01-21 14:23:39.742127523 +0000 UTC m=+0.160071512 container start 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 09:23:39 np0005590528 podman[255514]: 2026-01-21 14:23:39.745438525 +0000 UTC m=+0.163382524 container attach 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]: {
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:    "0": [
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:        {
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "devices": [
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:                "/dev/loop3"
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            ],
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "lv_name": "ceph_lv0",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "lv_size": "21470642176",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "name": "ceph_lv0",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:            "tags": {
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:23:39 np0005590528 sweet_blackwell[255530]:                "ceph.cluster_name": "ceph",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.crush_device_class": "",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.encrypted": "0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.objectstore": "bluestore",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osd_id": "0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.type": "block",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.vdo": "0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.with_tpm": "0"
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            },
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "type": "block",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "vg_name": "ceph_vg0"
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:        }
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:    ],
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:    "1": [
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:        {
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "devices": [
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "/dev/loop4"
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            ],
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_name": "ceph_lv1",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_size": "21470642176",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "name": "ceph_lv1",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "tags": {
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.cluster_name": "ceph",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.crush_device_class": "",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.encrypted": "0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.objectstore": "bluestore",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osd_id": "1",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.type": "block",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.vdo": "0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.with_tpm": "0"
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            },
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "type": "block",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "vg_name": "ceph_vg1"
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:        }
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:    ],
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:    "2": [
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:        {
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "devices": [
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "/dev/loop5"
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            ],
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_name": "ceph_lv2",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_size": "21470642176",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "name": "ceph_lv2",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "tags": {
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.cluster_name": "ceph",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.crush_device_class": "",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.encrypted": "0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.objectstore": "bluestore",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osd_id": "2",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.type": "block",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.vdo": "0",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:                "ceph.with_tpm": "0"
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            },
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "type": "block",
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:            "vg_name": "ceph_vg2"
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:        }
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]:    ]
Jan 21 09:23:40 np0005590528 sweet_blackwell[255530]: }
Jan 21 09:23:40 np0005590528 systemd[1]: libpod-5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c.scope: Deactivated successfully.
Jan 21 09:23:40 np0005590528 podman[255514]: 2026-01-21 14:23:40.021524257 +0000 UTC m=+0.439468246 container died 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 09:23:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s wr, 0 op/s
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:23:41 np0005590528 systemd[1]: var-lib-containers-storage-overlay-6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168-merged.mount: Deactivated successfully.
Jan 21 09:23:41 np0005590528 podman[255514]: 2026-01-21 14:23:41.070932493 +0000 UTC m=+1.488876482 container remove 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 09:23:41 np0005590528 systemd[1]: libpod-conmon-5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c.scope: Deactivated successfully.
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762", "format": "json"}]: dispatch
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a974555c-2f99-4804-bf49-5a8570c58762, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:41 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a974555c-2f99-4804-bf49-5a8570c58762, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:41 np0005590528 podman[255616]: 2026-01-21 14:23:41.59805678 +0000 UTC m=+0.027267704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:23:41 np0005590528 podman[255616]: 2026-01-21 14:23:41.862691094 +0000 UTC m=+0.291901958 container create cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 21 09:23:41 np0005590528 systemd[1]: Started libpod-conmon-cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349.scope.
Jan 21 09:23:41 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:23:41 np0005590528 podman[255616]: 2026-01-21 14:23:41.969678866 +0000 UTC m=+0.398889730 container init cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 09:23:41 np0005590528 podman[255616]: 2026-01-21 14:23:41.976931893 +0000 UTC m=+0.406142757 container start cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:23:41 np0005590528 dreamy_babbage[255632]: 167 167
Jan 21 09:23:41 np0005590528 systemd[1]: libpod-cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349.scope: Deactivated successfully.
Jan 21 09:23:42 np0005590528 podman[255616]: 2026-01-21 14:23:42.008846408 +0000 UTC m=+0.438057302 container attach cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 09:23:42 np0005590528 podman[255616]: 2026-01-21 14:23:42.009769651 +0000 UTC m=+0.438980515 container died cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:23:42 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b10fa3b4837ca5f4eb89c6b954c39a0b8c047d927e5ab1a8f67b14c0d3b918dc-merged.mount: Deactivated successfully.
Jan 21 09:23:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:23:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:23:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:23:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:23:42 np0005590528 podman[255616]: 2026-01-21 14:23:42.136704068 +0000 UTC m=+0.565914932 container remove cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 09:23:42 np0005590528 systemd[1]: libpod-conmon-cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349.scope: Deactivated successfully.
Jan 21 09:23:42 np0005590528 podman[255656]: 2026-01-21 14:23:42.312698336 +0000 UTC m=+0.046399699 container create 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 09:23:42 np0005590528 systemd[1]: Started libpod-conmon-6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f.scope.
Jan 21 09:23:42 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:23:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:42 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:23:42 np0005590528 podman[255656]: 2026-01-21 14:23:42.292816293 +0000 UTC m=+0.026517686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:23:42 np0005590528 podman[255656]: 2026-01-21 14:23:42.39713182 +0000 UTC m=+0.130833203 container init 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:23:42 np0005590528 podman[255656]: 2026-01-21 14:23:42.403956296 +0000 UTC m=+0.137657659 container start 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 09:23:42 np0005590528 podman[255656]: 2026-01-21 14:23:42.411068309 +0000 UTC m=+0.144769672 container attach 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:23:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s wr, 0 op/s
Jan 21 09:23:43 np0005590528 lvm[255752]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:23:43 np0005590528 lvm[255751]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:23:43 np0005590528 lvm[255752]: VG ceph_vg1 finished
Jan 21 09:23:43 np0005590528 lvm[255751]: VG ceph_vg0 finished
Jan 21 09:23:43 np0005590528 lvm[255754]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:23:43 np0005590528 lvm[255754]: VG ceph_vg2 finished
Jan 21 09:23:43 np0005590528 hardcore_gates[255673]: {}
Jan 21 09:23:43 np0005590528 systemd[1]: libpod-6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f.scope: Deactivated successfully.
Jan 21 09:23:43 np0005590528 systemd[1]: libpod-6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f.scope: Consumed 1.314s CPU time.
Jan 21 09:23:43 np0005590528 podman[255656]: 2026-01-21 14:23:43.194545308 +0000 UTC m=+0.928246691 container died 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:23:43 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c-merged.mount: Deactivated successfully.
Jan 21 09:23:43 np0005590528 podman[255656]: 2026-01-21 14:23:43.23205085 +0000 UTC m=+0.965752213 container remove 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:23:43 np0005590528 systemd[1]: libpod-conmon-6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f.scope: Deactivated successfully.
Jan 21 09:23:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:23:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:23:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:23:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:23:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:23:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:23:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 09:23:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 09:23:47 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762_89f00eb7-0d7f-4bfa-aebe-ef725e504018", "force": true, "format": "json"}]: dispatch
Jan 21 09:23:47 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a974555c-2f99-4804-bf49-5a8570c58762_89f00eb7-0d7f-4bfa-aebe-ef725e504018, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp'
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp' to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta'
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a974555c-2f99-4804-bf49-5a8570c58762_89f00eb7-0d7f-4bfa-aebe-ef725e504018, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762", "force": true, "format": "json"}]: dispatch
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a974555c-2f99-4804-bf49-5a8570c58762, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp'
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp' to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta'
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a974555c-2f99-4804-bf49-5a8570c58762, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 09:23:48 np0005590528 nova_compute[239261]: 2026-01-21 14:23:48.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662150522907583 of space, bias 1.0, pg target 0.1998645156872275 quantized to 32 (current 32)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005554138018903753 of space, bias 4.0, pg target 0.6664965622684504 quantized to 16 (current 16)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:23:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "format": "json"}]: dispatch
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:536a2c39-721a-4234-bb20-8865a7392cf1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:536a2c39-721a-4234-bb20-8865a7392cf1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '536a2c39-721a-4234-bb20-8865a7392cf1' of type subvolume
Jan 21 09:23:51 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:23:51.892+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '536a2c39-721a-4234-bb20-8865a7392cf1' of type subvolume
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "force": true, "format": "json"}]: dispatch
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1'' moved to trashcan
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:23:51 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 09:23:52 np0005590528 nova_compute[239261]: 2026-01-21 14:23:52.041 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:52 np0005590528 nova_compute[239261]: 2026-01-21 14:23:52.042 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:52 np0005590528 nova_compute[239261]: 2026-01-21 14:23:52.042 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:23:52 np0005590528 nova_compute[239261]: 2026-01-21 14:23:52.042 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:23:52 np0005590528 podman[255793]: 2026-01-21 14:23:52.329169524 +0000 UTC m=+0.053268866 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 21 09:23:52 np0005590528 nova_compute[239261]: 2026-01-21 14:23:52.378 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:23:52 np0005590528 nova_compute[239261]: 2026-01-21 14:23:52.378 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:52 np0005590528 podman[255792]: 2026-01-21 14:23:52.384641883 +0000 UTC m=+0.108765706 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 09:23:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s wr, 2 op/s
Jan 21 09:23:53 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:23:53 np0005590528 nova_compute[239261]: 2026-01-21 14:23:53.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:53 np0005590528 nova_compute[239261]: 2026-01-21 14:23:53.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 2 op/s
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.720 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.779 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.779 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.779 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.845 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.846 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.846 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.846 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:23:54 np0005590528 nova_compute[239261]: 2026-01-21 14:23:54.846 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:23:55 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:23:55 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1618769977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:23:55 np0005590528 nova_compute[239261]: 2026-01-21 14:23:55.354 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:23:55 np0005590528 nova_compute[239261]: 2026-01-21 14:23:55.499 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:23:55 np0005590528 nova_compute[239261]: 2026-01-21 14:23:55.500 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5014MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:23:55 np0005590528 nova_compute[239261]: 2026-01-21 14:23:55.501 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:23:55 np0005590528 nova_compute[239261]: 2026-01-21 14:23:55.501 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:23:55 np0005590528 nova_compute[239261]: 2026-01-21 14:23:55.984 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:23:55 np0005590528 nova_compute[239261]: 2026-01-21 14:23:55.984 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.113 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing inventories for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.192 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating ProviderTree inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.193 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.211 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing aggregate associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.237 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing trait associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.256 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:23:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 45 KiB/s wr, 3 op/s
Jan 21 09:23:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:23:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071421282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.768 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.773 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.854 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.856 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.856 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.857 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:56 np0005590528 nova_compute[239261]: 2026-01-21 14:23:56.857 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 21 09:23:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 21 09:23:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 21 09:23:57 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 21 09:23:57 np0005590528 nova_compute[239261]: 2026-01-21 14:23:57.819 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 54 KiB/s wr, 4 op/s
Jan 21 09:23:58 np0005590528 nova_compute[239261]: 2026-01-21 14:23:58.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:23:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 3 op/s
Jan 21 09:24:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 3 op/s
Jan 21 09:24:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:03 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 21 09:24:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 21 09:24:04 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 21 09:24:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s wr, 1 op/s
Jan 21 09:24:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 1 op/s
Jan 21 09:24:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 1 op/s
Jan 21 09:24:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Jan 21 09:24:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:24:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:24:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:24:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:24:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:24:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:24:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Jan 21 09:24:12 np0005590528 nova_compute[239261]: 2026-01-21 14:24:12.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:12 np0005590528 nova_compute[239261]: 2026-01-21 14:24:12.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 21 09:24:12 np0005590528 nova_compute[239261]: 2026-01-21 14:24:12.750 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 21 09:24:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s wr, 0 op/s
Jan 21 09:24:14 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:24:14.787 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:24:14 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:24:14.788 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:24:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 09:24:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 09:24:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:20 np0005590528 nova_compute[239261]: 2026-01-21 14:24:20.402 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 09:24:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:24:21 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6508 writes, 29K keys, 6508 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6508 writes, 6508 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1724 writes, 8397 keys, 1724 commit groups, 1.0 writes per commit group, ingest: 11.03 MB, 0.02 MB/s#012Interval WAL: 1724 writes, 1724 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     32.2      1.06              0.11        16    0.067       0      0       0.0       0.0#012  L6      1/0    8.68 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5     47.4     39.1      3.05              0.36        15    0.203     73K   8415       0.0       0.0#012 Sum      1/0    8.68 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     35.2     37.3      4.11              0.47        31    0.133     73K   8415       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0     52.6     53.9      0.89              0.15         8    0.111     24K   2633       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     47.4     39.1      3.05              0.36        15    0.203     73K   8415       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     32.2      1.06              0.11        15    0.071       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.15 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 4.1 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 304.00 MB usage: 16.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000183 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1026,15.64 MB,5.14359%) FilterBlock(32,213.55 KB,0.0685993%) IndexBlock(32,395.70 KB,0.127115%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 21 09:24:21 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:24:21.790 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:24:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:24:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:24:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/471390696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:24:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:24:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/471390696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:24:23 np0005590528 podman[255882]: 2026-01-21 14:24:23.369343608 +0000 UTC m=+0.087887288 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 09:24:23 np0005590528 podman[255881]: 2026-01-21 14:24:23.374244947 +0000 UTC m=+0.103525818 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 09:24:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:24:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/f62c360a-91ba-4a12-8a48-a3a783029d44'.
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp'
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp' to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta'
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "format": "json"}]: dispatch
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:27 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:27 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:24:27 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:24:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:24:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:29 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b", "format": "json"}]: dispatch
Jan 21 09:24:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:29 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Jan 21 09:24:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b_ca15cc81-265c-4731-8934-f7ef13bd3c7e", "force": true, "format": "json"}]: dispatch
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b_ca15cc81-265c-4731-8934-f7ef13bd3c7e, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp'
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp' to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta'
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b_ca15cc81-265c-4731-8934-f7ef13bd3c7e, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b", "force": true, "format": "json"}]: dispatch
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp'
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp' to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta'
Jan 21 09:24:33 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:24:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:24:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:24:33.916 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:24:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:24:33.916 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:24:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 1 op/s
Jan 21 09:24:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 2 op/s
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f701d9-3332-493b-805e-f694262123e2", "format": "json"}]: dispatch
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a5f701d9-3332-493b-805e-f694262123e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a5f701d9-3332-493b-805e-f694262123e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:24:37 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:24:37.030+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a5f701d9-3332-493b-805e-f694262123e2' of type subvolume
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a5f701d9-3332-493b-805e-f694262123e2' of type subvolume
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "force": true, "format": "json"}]: dispatch
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2'' moved to trashcan
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:24:37 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 09:24:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 2 op/s
Jan 21 09:24:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:24:39
Jan 21 09:24:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:24:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:24:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'volumes']
Jan 21 09:24:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 4 op/s
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/5b0d28ac-7ccd-4441-b34b-f4cb942173d6'.
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp'
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp' to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta'
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "format": "json"}]: dispatch
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:40 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:24:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:24:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:24:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:24:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:24:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:24:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:24:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 4 op/s
Jan 21 09:24:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 21 09:24:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 21 09:24:42 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:24:44 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f", "format": "json"}]: dispatch
Jan 21 09:24:44 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:24:44 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:24:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 09:24:44 np0005590528 podman[256069]: 2026-01-21 14:24:44.70871789 +0000 UTC m=+0.024615849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:24:44 np0005590528 podman[256069]: 2026-01-21 14:24:44.843833635 +0000 UTC m=+0.159731584 container create 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:24:44 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:24:44 np0005590528 systemd[1]: Started libpod-conmon-7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2.scope.
Jan 21 09:24:44 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:24:45 np0005590528 podman[256069]: 2026-01-21 14:24:45.176126705 +0000 UTC m=+0.492024684 container init 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:24:45 np0005590528 podman[256069]: 2026-01-21 14:24:45.183584247 +0000 UTC m=+0.499482176 container start 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 09:24:45 np0005590528 podman[256069]: 2026-01-21 14:24:45.187393068 +0000 UTC m=+0.503291017 container attach 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:24:45 np0005590528 happy_wozniak[256085]: 167 167
Jan 21 09:24:45 np0005590528 systemd[1]: libpod-7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2.scope: Deactivated successfully.
Jan 21 09:24:45 np0005590528 conmon[256085]: conmon 7b15118e68ac88ff9385 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2.scope/container/memory.events
Jan 21 09:24:45 np0005590528 podman[256069]: 2026-01-21 14:24:45.192029002 +0000 UTC m=+0.507926951 container died 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:24:45 np0005590528 systemd[1]: var-lib-containers-storage-overlay-88193525b2af2954c81ffe4b73e80de7d78b705b68b47415e69939f716d30e51-merged.mount: Deactivated successfully.
Jan 21 09:24:45 np0005590528 podman[256069]: 2026-01-21 14:24:45.29722465 +0000 UTC m=+0.613122589 container remove 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:24:45 np0005590528 systemd[1]: libpod-conmon-7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2.scope: Deactivated successfully.
Jan 21 09:24:45 np0005590528 podman[256109]: 2026-01-21 14:24:45.487925116 +0000 UTC m=+0.060533102 container create ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 09:24:45 np0005590528 systemd[1]: Started libpod-conmon-ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c.scope.
Jan 21 09:24:45 np0005590528 podman[256109]: 2026-01-21 14:24:45.459338692 +0000 UTC m=+0.031946708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:24:45 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:24:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:45 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:45 np0005590528 podman[256109]: 2026-01-21 14:24:45.592634563 +0000 UTC m=+0.165242579 container init ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:24:45 np0005590528 podman[256109]: 2026-01-21 14:24:45.600833172 +0000 UTC m=+0.173441158 container start ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:24:45 np0005590528 podman[256109]: 2026-01-21 14:24:45.610580259 +0000 UTC m=+0.183188275 container attach ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:24:46 np0005590528 fervent_hopper[256125]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:24:46 np0005590528 fervent_hopper[256125]: --> All data devices are unavailable
Jan 21 09:24:46 np0005590528 systemd[1]: libpod-ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c.scope: Deactivated successfully.
Jan 21 09:24:46 np0005590528 podman[256109]: 2026-01-21 14:24:46.155236932 +0000 UTC m=+0.727844918 container died ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 09:24:46 np0005590528 systemd[1]: var-lib-containers-storage-overlay-e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc-merged.mount: Deactivated successfully.
Jan 21 09:24:46 np0005590528 podman[256109]: 2026-01-21 14:24:46.309134003 +0000 UTC m=+0.881741989 container remove ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 09:24:46 np0005590528 systemd[1]: libpod-conmon-ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c.scope: Deactivated successfully.
Jan 21 09:24:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 09:24:46 np0005590528 podman[256222]: 2026-01-21 14:24:46.779158822 +0000 UTC m=+0.048483209 container create aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 09:24:46 np0005590528 podman[256222]: 2026-01-21 14:24:46.758177232 +0000 UTC m=+0.027501659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:24:46 np0005590528 systemd[1]: Started libpod-conmon-aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625.scope.
Jan 21 09:24:46 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:24:47 np0005590528 podman[256222]: 2026-01-21 14:24:47.093446205 +0000 UTC m=+0.362770642 container init aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 09:24:47 np0005590528 podman[256222]: 2026-01-21 14:24:47.102102865 +0000 UTC m=+0.371427252 container start aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 09:24:47 np0005590528 crazy_chebyshev[256239]: 167 167
Jan 21 09:24:47 np0005590528 systemd[1]: libpod-aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625.scope: Deactivated successfully.
Jan 21 09:24:47 np0005590528 podman[256222]: 2026-01-21 14:24:47.367417365 +0000 UTC m=+0.636741782 container attach aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:24:47 np0005590528 podman[256222]: 2026-01-21 14:24:47.36802385 +0000 UTC m=+0.637348247 container died aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 09:24:47 np0005590528 systemd[1]: var-lib-containers-storage-overlay-7a8507ec3b4476a3935871c3a5233e39ec0ba565fd6f39b595ad391d804897bd-merged.mount: Deactivated successfully.
Jan 21 09:24:47 np0005590528 podman[256222]: 2026-01-21 14:24:47.464719742 +0000 UTC m=+0.734044149 container remove aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 09:24:47 np0005590528 systemd[1]: libpod-conmon-aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625.scope: Deactivated successfully.
Jan 21 09:24:47 np0005590528 podman[256261]: 2026-01-21 14:24:47.724834967 +0000 UTC m=+0.110664882 container create 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:24:47 np0005590528 podman[256261]: 2026-01-21 14:24:47.642969136 +0000 UTC m=+0.028799121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:24:47 np0005590528 systemd[1]: Started libpod-conmon-3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b.scope.
Jan 21 09:24:47 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:24:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:47 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:47 np0005590528 podman[256261]: 2026-01-21 14:24:47.809125616 +0000 UTC m=+0.194955541 container init 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 09:24:47 np0005590528 podman[256261]: 2026-01-21 14:24:47.81629582 +0000 UTC m=+0.202125725 container start 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:24:47 np0005590528 podman[256261]: 2026-01-21 14:24:47.821428015 +0000 UTC m=+0.207257950 container attach 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:24:48 np0005590528 brave_wu[256278]: {
Jan 21 09:24:48 np0005590528 brave_wu[256278]:    "0": [
Jan 21 09:24:48 np0005590528 brave_wu[256278]:        {
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "devices": [
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "/dev/loop3"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            ],
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_name": "ceph_lv0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_size": "21470642176",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "name": "ceph_lv0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "tags": {
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cluster_name": "ceph",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.crush_device_class": "",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.encrypted": "0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.objectstore": "bluestore",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osd_id": "0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.type": "block",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.vdo": "0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.with_tpm": "0"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            },
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "type": "block",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "vg_name": "ceph_vg0"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:        }
Jan 21 09:24:48 np0005590528 brave_wu[256278]:    ],
Jan 21 09:24:48 np0005590528 brave_wu[256278]:    "1": [
Jan 21 09:24:48 np0005590528 brave_wu[256278]:        {
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "devices": [
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "/dev/loop4"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            ],
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_name": "ceph_lv1",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_size": "21470642176",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "name": "ceph_lv1",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "tags": {
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cluster_name": "ceph",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.crush_device_class": "",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.encrypted": "0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.objectstore": "bluestore",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osd_id": "1",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.type": "block",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.vdo": "0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.with_tpm": "0"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            },
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "type": "block",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "vg_name": "ceph_vg1"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:        }
Jan 21 09:24:48 np0005590528 brave_wu[256278]:    ],
Jan 21 09:24:48 np0005590528 brave_wu[256278]:    "2": [
Jan 21 09:24:48 np0005590528 brave_wu[256278]:        {
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "devices": [
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "/dev/loop5"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            ],
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_name": "ceph_lv2",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_size": "21470642176",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "name": "ceph_lv2",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "tags": {
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.cluster_name": "ceph",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.crush_device_class": "",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.encrypted": "0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.objectstore": "bluestore",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osd_id": "2",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.type": "block",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.vdo": "0",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:                "ceph.with_tpm": "0"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            },
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "type": "block",
Jan 21 09:24:48 np0005590528 brave_wu[256278]:            "vg_name": "ceph_vg2"
Jan 21 09:24:48 np0005590528 brave_wu[256278]:        }
Jan 21 09:24:48 np0005590528 brave_wu[256278]:    ]
Jan 21 09:24:48 np0005590528 brave_wu[256278]: }
Jan 21 09:24:48 np0005590528 systemd[1]: libpod-3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b.scope: Deactivated successfully.
Jan 21 09:24:48 np0005590528 podman[256261]: 2026-01-21 14:24:48.129634729 +0000 UTC m=+0.515464644 container died 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:24:48 np0005590528 systemd[1]: var-lib-containers-storage-overlay-887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8-merged.mount: Deactivated successfully.
Jan 21 09:24:48 np0005590528 podman[256261]: 2026-01-21 14:24:48.181518341 +0000 UTC m=+0.567348246 container remove 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:24:48 np0005590528 systemd[1]: libpod-conmon-3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b.scope: Deactivated successfully.
Jan 21 09:24:48 np0005590528 podman[256360]: 2026-01-21 14:24:48.633622464 +0000 UTC m=+0.044160826 container create 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:24:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 09:24:48 np0005590528 systemd[1]: Started libpod-conmon-7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25.scope.
Jan 21 09:24:48 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:24:48 np0005590528 podman[256360]: 2026-01-21 14:24:48.707739746 +0000 UTC m=+0.118278128 container init 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:24:48 np0005590528 podman[256360]: 2026-01-21 14:24:48.618408874 +0000 UTC m=+0.028947256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:24:48 np0005590528 podman[256360]: 2026-01-21 14:24:48.716829206 +0000 UTC m=+0.127367608 container start 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:24:48 np0005590528 podman[256360]: 2026-01-21 14:24:48.720920626 +0000 UTC m=+0.131459018 container attach 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 09:24:48 np0005590528 intelligent_goldwasser[256377]: 167 167
Jan 21 09:24:48 np0005590528 systemd[1]: libpod-7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25.scope: Deactivated successfully.
Jan 21 09:24:48 np0005590528 podman[256360]: 2026-01-21 14:24:48.721827058 +0000 UTC m=+0.132365440 container died 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 09:24:48 np0005590528 systemd[1]: var-lib-containers-storage-overlay-3164e00fccf5431cb6b5d67db420ccb47a7bf25e645484aec9e90940d1e886a5-merged.mount: Deactivated successfully.
Jan 21 09:24:48 np0005590528 podman[256360]: 2026-01-21 14:24:48.765438588 +0000 UTC m=+0.175976980 container remove 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 09:24:48 np0005590528 systemd[1]: libpod-conmon-7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25.scope: Deactivated successfully.
Jan 21 09:24:48 np0005590528 podman[256402]: 2026-01-21 14:24:48.964437897 +0000 UTC m=+0.054293651 container create 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:24:49 np0005590528 systemd[1]: Started libpod-conmon-80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9.scope.
Jan 21 09:24:49 np0005590528 podman[256402]: 2026-01-21 14:24:48.938484836 +0000 UTC m=+0.028340670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:24:49 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:24:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:49 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:24:49 np0005590528 podman[256402]: 2026-01-21 14:24:49.066110139 +0000 UTC m=+0.155965913 container init 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 09:24:49 np0005590528 podman[256402]: 2026-01-21 14:24:49.072664258 +0000 UTC m=+0.162520002 container start 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:24:49 np0005590528 podman[256402]: 2026-01-21 14:24:49.07642093 +0000 UTC m=+0.166276704 container attach 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:24:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 21 09:24:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f_58879d21-f788-4eae-af79-848cdb9584de", "force": true, "format": "json"}]: dispatch
Jan 21 09:24:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f_58879d21-f788-4eae-af79-848cdb9584de, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:49 np0005590528 lvm[256496]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:24:49 np0005590528 lvm[256496]: VG ceph_vg0 finished
Jan 21 09:24:49 np0005590528 lvm[256497]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:24:49 np0005590528 lvm[256497]: VG ceph_vg1 finished
Jan 21 09:24:49 np0005590528 lvm[256499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:24:49 np0005590528 lvm[256499]: VG ceph_vg2 finished
Jan 21 09:24:49 np0005590528 flamboyant_lehmann[256418]: {}
Jan 21 09:24:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 21 09:24:49 np0005590528 systemd[1]: libpod-80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9.scope: Deactivated successfully.
Jan 21 09:24:49 np0005590528 systemd[1]: libpod-80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9.scope: Consumed 1.321s CPU time.
Jan 21 09:24:49 np0005590528 podman[256402]: 2026-01-21 14:24:49.899906623 +0000 UTC m=+0.989762427 container died 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:24:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp'
Jan 21 09:24:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp' to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta'
Jan 21 09:24:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f_58879d21-f788-4eae-af79-848cdb9584de, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f", "force": true, "format": "json"}]: dispatch
Jan 21 09:24:49 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:50 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 21 09:24:50 np0005590528 systemd[1]: var-lib-containers-storage-overlay-ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a-merged.mount: Deactivated successfully.
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp'
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp' to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta'
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:50 np0005590528 podman[256402]: 2026-01-21 14:24:50.467220917 +0000 UTC m=+1.557076691 container remove 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:24:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:24:50 np0005590528 systemd[1]: libpod-conmon-80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9.scope: Deactivated successfully.
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s wr, 3 op/s
Jan 21 09:24:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:24:50 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:24:50 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662203308163098 of space, bias 1.0, pg target 0.19986609924489293 quantized to 32 (current 32)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005818088826097966 of space, bias 4.0, pg target 0.6981706591317559 quantized to 16 (current 16)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:24:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:24:51 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:24:51 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:24:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 2 op/s
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "format": "json"}]: dispatch
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4564206e-a1af-4abb-a427-9d87957a49e0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4564206e-a1af-4abb-a427-9d87957a49e0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:24:53 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:24:53.372+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4564206e-a1af-4abb-a427-9d87957a49e0' of type subvolume
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4564206e-a1af-4abb-a427-9d87957a49e0' of type subvolume
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "force": true, "format": "json"}]: dispatch
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0'' moved to trashcan
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:24:53 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 09:24:53 np0005590528 nova_compute[239261]: 2026-01-21 14:24:53.740 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:53 np0005590528 nova_compute[239261]: 2026-01-21 14:24:53.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:53 np0005590528 nova_compute[239261]: 2026-01-21 14:24:53.741 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:24:53 np0005590528 nova_compute[239261]: 2026-01-21 14:24:53.741 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:24:53 np0005590528 nova_compute[239261]: 2026-01-21 14:24:53.758 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:24:53 np0005590528 nova_compute[239261]: 2026-01-21 14:24:53.759 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:53 np0005590528 nova_compute[239261]: 2026-01-21 14:24:53.760 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:54 np0005590528 podman[256541]: 2026-01-21 14:24:54.344325867 +0000 UTC m=+0.065281067 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 21 09:24:54 np0005590528 podman[256540]: 2026-01-21 14:24:54.378478218 +0000 UTC m=+0.094729294 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:24:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s wr, 2 op/s
Jan 21 09:24:55 np0005590528 nova_compute[239261]: 2026-01-21 14:24:55.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:55 np0005590528 nova_compute[239261]: 2026-01-21 14:24:55.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:55 np0005590528 nova_compute[239261]: 2026-01-21 14:24:55.761 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:24:55 np0005590528 nova_compute[239261]: 2026-01-21 14:24:55.762 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:24:55 np0005590528 nova_compute[239261]: 2026-01-21 14:24:55.762 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:24:55 np0005590528 nova_compute[239261]: 2026-01-21 14:24:55.762 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:24:55 np0005590528 nova_compute[239261]: 2026-01-21 14:24:55.762 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:24:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:24:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943741548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:24:56 np0005590528 nova_compute[239261]: 2026-01-21 14:24:56.353 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:24:56 np0005590528 nova_compute[239261]: 2026-01-21 14:24:56.527 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:24:56 np0005590528 nova_compute[239261]: 2026-01-21 14:24:56.529 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4991MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:24:56 np0005590528 nova_compute[239261]: 2026-01-21 14:24:56.529 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:24:56 np0005590528 nova_compute[239261]: 2026-01-21 14:24:56.529 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:24:56 np0005590528 nova_compute[239261]: 2026-01-21 14:24:56.613 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:24:56 np0005590528 nova_compute[239261]: 2026-01-21 14:24:56.613 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:24:56 np0005590528 nova_compute[239261]: 2026-01-21 14:24:56.632 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:24:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 3 op/s
Jan 21 09:24:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:24:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528613723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:24:57 np0005590528 nova_compute[239261]: 2026-01-21 14:24:57.183 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:24:57 np0005590528 nova_compute[239261]: 2026-01-21 14:24:57.188 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:24:57 np0005590528 nova_compute[239261]: 2026-01-21 14:24:57.227 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:24:57 np0005590528 nova_compute[239261]: 2026-01-21 14:24:57.229 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:24:57 np0005590528 nova_compute[239261]: 2026-01-21 14:24:57.230 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:24:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 21 09:24:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 21 09:24:57 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 21 09:24:58 np0005590528 nova_compute[239261]: 2026-01-21 14:24:58.230 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:58 np0005590528 nova_compute[239261]: 2026-01-21 14:24:58.230 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:24:58 np0005590528 nova_compute[239261]: 2026-01-21 14:24:58.230 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:24:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 09:24:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:24:59 np0005590528 nova_compute[239261]: 2026-01-21 14:24:59.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Jan 21 09:25:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Jan 21 09:25:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 21 09:25:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 21 09:25:04 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 21 09:25:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 32 KiB/s wr, 2 op/s
Jan 21 09:25:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 225 B/s rd, 29 KiB/s wr, 2 op/s
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/f269e34a-77c5-41c7-8925-5dbbabe47fe9'.
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp'
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp' to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta'
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "format": "json"}]: dispatch
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:07 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:07 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 09:25:07 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 09:25:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Jan 21 09:25:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:10 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7", "format": "json"}]: dispatch
Jan 21 09:25:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:10 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 09:25:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:25:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:25:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:25:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:25:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:25:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:25:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 09:25:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s wr, 1 op/s
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7_31812bf0-d81e-40c3-a226-5318705677c6", "force": true, "format": "json"}]: dispatch
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7_31812bf0-d81e-40c3-a226-5318705677c6, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp'
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp' to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta'
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7_31812bf0-d81e-40c3-a226-5318705677c6, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7", "force": true, "format": "json"}]: dispatch
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp'
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp' to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta'
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 09:25:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "format": "json"}]: dispatch
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '074ccac6-f42c-493c-9d0b-aab404cacaf6' of type subvolume
Jan 21 09:25:19 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:25:19.626+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '074ccac6-f42c-493c-9d0b-aab404cacaf6' of type subvolume
Jan 21 09:25:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "force": true, "format": "json"}]: dispatch
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6'' moved to trashcan
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 09:25:19 np0005590528 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 09:25:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 09:25:22 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 21 09:25:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 09:25:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 21 09:25:23 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 21 09:25:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:25:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/223500051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:25:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:25:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/223500051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:25:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 09:25:25 np0005590528 podman[256629]: 2026-01-21 14:25:25.356518091 +0000 UTC m=+0.077690491 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 21 09:25:25 np0005590528 podman[256628]: 2026-01-21 14:25:25.370210913 +0000 UTC m=+0.086970436 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 21 09:25:26 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:25:26.460 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 21 09:25:26 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:25:26.462 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 21 09:25:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 09:25:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:25:27 np0005590528 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 9884 writes, 35K keys, 9884 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9884 writes, 2661 syncs, 3.71 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2879 writes, 8594 keys, 2879 commit groups, 1.0 writes per commit group, ingest: 11.19 MB, 0.02 MB/s#012Interval WAL: 2879 writes, 1188 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 09:25:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.848756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529848825, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2392, "num_deletes": 510, "total_data_size": 3625397, "memory_usage": 3687456, "flush_reason": "Manual Compaction"}
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529874267, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3567065, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28597, "largest_seqno": 30988, "table_properties": {"data_size": 3556652, "index_size": 6139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 24805, "raw_average_key_size": 19, "raw_value_size": 3533546, "raw_average_value_size": 2811, "num_data_blocks": 270, "num_entries": 1257, "num_filter_entries": 1257, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769005327, "oldest_key_time": 1769005327, "file_creation_time": 1769005529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 25543 microseconds, and 9079 cpu microseconds.
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.874308) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3567065 bytes OK
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.874326) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.877751) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.877773) EVENT_LOG_v1 {"time_micros": 1769005529877767, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.877793) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3614275, prev total WAL file size 3614275, number of live WAL files 2.
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.878952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3483KB)], [62(8891KB)]
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529879031, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12671830, "oldest_snapshot_seqno": -1}
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6039 keys, 10914663 bytes, temperature: kUnknown
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529955480, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10914663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10871730, "index_size": 26759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 152118, "raw_average_key_size": 25, "raw_value_size": 10760917, "raw_average_value_size": 1781, "num_data_blocks": 1095, "num_entries": 6039, "num_filter_entries": 6039, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769005529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.955777) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10914663 bytes
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.957596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.5 rd, 142.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.7 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 7076, records dropped: 1037 output_compression: NoCompression
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.957611) EVENT_LOG_v1 {"time_micros": 1769005529957603, "job": 34, "event": "compaction_finished", "compaction_time_micros": 76574, "compaction_time_cpu_micros": 31531, "output_level": 6, "num_output_files": 1, "total_output_size": 10914663, "num_input_records": 7076, "num_output_records": 6039, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529958387, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529959764, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.878842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:25:29 np0005590528 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 09:25:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 09:25:31 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:25:31.464 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 21 09:25:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:25:32 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4725 syncs, 3.15 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4491 writes, 14K keys, 4491 commit groups, 1.0 writes per commit group, ingest: 21.35 MB, 0.04 MB/s#012Interval WAL: 4491 writes, 1896 syncs, 2.37 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 09:25:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 09:25:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:25:33.917 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:25:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:25:33.918 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:25:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:25:33.918 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:25:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 21 09:25:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 21 09:25:34 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 21 09:25:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 1 op/s
Jan 21 09:25:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Jan 21 09:25:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:25:37 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 9322 writes, 33K keys, 9322 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9322 writes, 2294 syncs, 4.06 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2348 writes, 6511 keys, 2348 commit groups, 1.0 writes per commit group, ingest: 6.88 MB, 0.01 MB/s#012Interval WAL: 2348 writes, 874 syncs, 2.69 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 09:25:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Jan 21 09:25:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:25:39
Jan 21 09:25:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:25:39 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 09:25:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:25:39 np0005590528 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 09:25:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms']
Jan 21 09:25:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:25:40 np0005590528 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 09:25:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:25:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:25:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:25:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:25:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:25:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:25:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662106431694153 of space, bias 1.0, pg target 0.19986319295082458 quantized to 32 (current 32)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006057279001434559 of space, bias 4.0, pg target 0.726873480172147 quantized to 16 (current 16)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 09:25:50 np0005590528 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:25:51 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:25:52 np0005590528 podman[256815]: 2026-01-21 14:25:52.044453193 +0000 UTC m=+0.049622018 container create 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 09:25:52 np0005590528 systemd[1]: Started libpod-conmon-4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea.scope.
Jan 21 09:25:52 np0005590528 podman[256815]: 2026-01-21 14:25:52.01886958 +0000 UTC m=+0.024038465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:25:52 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:25:52 np0005590528 podman[256815]: 2026-01-21 14:25:52.147311554 +0000 UTC m=+0.152480459 container init 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:25:52 np0005590528 podman[256815]: 2026-01-21 14:25:52.157731177 +0000 UTC m=+0.162900002 container start 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 09:25:52 np0005590528 podman[256815]: 2026-01-21 14:25:52.164527622 +0000 UTC m=+0.169696517 container attach 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:25:52 np0005590528 gallant_easley[256831]: 167 167
Jan 21 09:25:52 np0005590528 systemd[1]: libpod-4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea.scope: Deactivated successfully.
Jan 21 09:25:52 np0005590528 podman[256815]: 2026-01-21 14:25:52.167603178 +0000 UTC m=+0.172771993 container died 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 09:25:52 np0005590528 systemd[1]: var-lib-containers-storage-overlay-31f818bc57662ad9fad716c0752e57bca7cfebc80397d3f8504a5e80c0ef0286-merged.mount: Deactivated successfully.
Jan 21 09:25:52 np0005590528 podman[256815]: 2026-01-21 14:25:52.226837298 +0000 UTC m=+0.232006113 container remove 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 09:25:52 np0005590528 systemd[1]: libpod-conmon-4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea.scope: Deactivated successfully.
Jan 21 09:25:52 np0005590528 podman[256860]: 2026-01-21 14:25:52.443512256 +0000 UTC m=+0.051506893 container create 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 09:25:52 np0005590528 systemd[1]: Started libpod-conmon-894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5.scope.
Jan 21 09:25:52 np0005590528 podman[256860]: 2026-01-21 14:25:52.418964179 +0000 UTC m=+0.026958856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:25:52 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:25:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:52 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:52 np0005590528 podman[256860]: 2026-01-21 14:25:52.534907088 +0000 UTC m=+0.142901755 container init 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 09:25:52 np0005590528 podman[256860]: 2026-01-21 14:25:52.544276516 +0000 UTC m=+0.152271153 container start 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 09:25:52 np0005590528 podman[256860]: 2026-01-21 14:25:52.548845677 +0000 UTC m=+0.156840344 container attach 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:25:52 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:53 np0005590528 blissful_maxwell[256877]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:25:53 np0005590528 blissful_maxwell[256877]: --> All data devices are unavailable
Jan 21 09:25:53 np0005590528 systemd[1]: libpod-894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5.scope: Deactivated successfully.
Jan 21 09:25:53 np0005590528 podman[256860]: 2026-01-21 14:25:53.041055874 +0000 UTC m=+0.649050541 container died 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 09:25:53 np0005590528 systemd[1]: var-lib-containers-storage-overlay-d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11-merged.mount: Deactivated successfully.
Jan 21 09:25:53 np0005590528 podman[256860]: 2026-01-21 14:25:53.106845674 +0000 UTC m=+0.714840311 container remove 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:25:53 np0005590528 systemd[1]: libpod-conmon-894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5.scope: Deactivated successfully.
Jan 21 09:25:53 np0005590528 podman[256972]: 2026-01-21 14:25:53.543390909 +0000 UTC m=+0.039902591 container create d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:25:53 np0005590528 systemd[1]: Started libpod-conmon-d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517.scope.
Jan 21 09:25:53 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:25:53 np0005590528 podman[256972]: 2026-01-21 14:25:53.618090245 +0000 UTC m=+0.114601937 container init d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:25:53 np0005590528 podman[256972]: 2026-01-21 14:25:53.52571566 +0000 UTC m=+0.022227332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:25:53 np0005590528 podman[256972]: 2026-01-21 14:25:53.628770245 +0000 UTC m=+0.125281897 container start d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 09:25:53 np0005590528 hopeful_chatelet[256989]: 167 167
Jan 21 09:25:53 np0005590528 podman[256972]: 2026-01-21 14:25:53.63225308 +0000 UTC m=+0.128764752 container attach d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 21 09:25:53 np0005590528 systemd[1]: libpod-d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517.scope: Deactivated successfully.
Jan 21 09:25:53 np0005590528 podman[256994]: 2026-01-21 14:25:53.674783104 +0000 UTC m=+0.027649244 container died d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 09:25:53 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b0b3d84df63104fa5f60abd8ddb823a0d10b598b624cd73827e5c3af161851ac-merged.mount: Deactivated successfully.
Jan 21 09:25:53 np0005590528 podman[256994]: 2026-01-21 14:25:53.714424147 +0000 UTC m=+0.067290267 container remove d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 09:25:53 np0005590528 systemd[1]: libpod-conmon-d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517.scope: Deactivated successfully.
Jan 21 09:25:54 np0005590528 podman[257016]: 2026-01-21 14:25:53.938227249 +0000 UTC m=+0.037273136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:25:54 np0005590528 podman[257016]: 2026-01-21 14:25:54.358818366 +0000 UTC m=+0.457864183 container create d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 09:25:54 np0005590528 systemd[1]: Started libpod-conmon-d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932.scope.
Jan 21 09:25:54 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:25:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:54 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:54 np0005590528 podman[257016]: 2026-01-21 14:25:54.451825827 +0000 UTC m=+0.550871654 container init d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:25:54 np0005590528 podman[257016]: 2026-01-21 14:25:54.460736234 +0000 UTC m=+0.559782041 container start d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 09:25:54 np0005590528 podman[257016]: 2026-01-21 14:25:54.465316956 +0000 UTC m=+0.564362783 container attach d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 09:25:54 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:25:54 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:54 np0005590528 nova_compute[239261]: 2026-01-21 14:25:54.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:54 np0005590528 nova_compute[239261]: 2026-01-21 14:25:54.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 21 09:25:54 np0005590528 nova_compute[239261]: 2026-01-21 14:25:54.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 21 09:25:54 np0005590528 nova_compute[239261]: 2026-01-21 14:25:54.740 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:25:54 np0005590528 nova_compute[239261]: 2026-01-21 14:25:54.740 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:54 np0005590528 nova_compute[239261]: 2026-01-21 14:25:54.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:54 np0005590528 focused_liskov[257032]: {
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:    "0": [
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:        {
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "devices": [
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "/dev/loop3"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            ],
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_name": "ceph_lv0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_size": "21470642176",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "name": "ceph_lv0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "tags": {
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cluster_name": "ceph",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.crush_device_class": "",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.encrypted": "0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.objectstore": "bluestore",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osd_id": "0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.type": "block",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.vdo": "0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.with_tpm": "0"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            },
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "type": "block",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "vg_name": "ceph_vg0"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:        }
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:    ],
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:    "1": [
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:        {
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "devices": [
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "/dev/loop4"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            ],
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_name": "ceph_lv1",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_size": "21470642176",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "name": "ceph_lv1",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "tags": {
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cluster_name": "ceph",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.crush_device_class": "",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.encrypted": "0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.objectstore": "bluestore",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osd_id": "1",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.type": "block",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.vdo": "0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.with_tpm": "0"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            },
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "type": "block",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "vg_name": "ceph_vg1"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:        }
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:    ],
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:    "2": [
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:        {
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "devices": [
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "/dev/loop5"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            ],
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_name": "ceph_lv2",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_size": "21470642176",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "name": "ceph_lv2",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "tags": {
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.cluster_name": "ceph",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.crush_device_class": "",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.encrypted": "0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.objectstore": "bluestore",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osd_id": "2",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.type": "block",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.vdo": "0",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:                "ceph.with_tpm": "0"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            },
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "type": "block",
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:            "vg_name": "ceph_vg2"
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:        }
Jan 21 09:25:54 np0005590528 focused_liskov[257032]:    ]
Jan 21 09:25:54 np0005590528 focused_liskov[257032]: }
Jan 21 09:25:54 np0005590528 systemd[1]: libpod-d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932.scope: Deactivated successfully.
Jan 21 09:25:54 np0005590528 podman[257016]: 2026-01-21 14:25:54.788890303 +0000 UTC m=+0.887936130 container died d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 09:25:54 np0005590528 systemd[1]: var-lib-containers-storage-overlay-b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c-merged.mount: Deactivated successfully.
Jan 21 09:25:54 np0005590528 podman[257016]: 2026-01-21 14:25:54.831295584 +0000 UTC m=+0.930341391 container remove d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:25:54 np0005590528 systemd[1]: libpod-conmon-d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932.scope: Deactivated successfully.
Jan 21 09:25:55 np0005590528 podman[257114]: 2026-01-21 14:25:55.306633161 +0000 UTC m=+0.047618268 container create 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:25:55 np0005590528 systemd[1]: Started libpod-conmon-677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c.scope.
Jan 21 09:25:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:25:55 np0005590528 podman[257114]: 2026-01-21 14:25:55.378486269 +0000 UTC m=+0.119471396 container init 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:25:55 np0005590528 podman[257114]: 2026-01-21 14:25:55.286696527 +0000 UTC m=+0.027681654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:25:55 np0005590528 podman[257114]: 2026-01-21 14:25:55.385855718 +0000 UTC m=+0.126840835 container start 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 09:25:55 np0005590528 podman[257114]: 2026-01-21 14:25:55.389778234 +0000 UTC m=+0.130763371 container attach 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 21 09:25:55 np0005590528 elegant_galileo[257130]: 167 167
Jan 21 09:25:55 np0005590528 systemd[1]: libpod-677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c.scope: Deactivated successfully.
Jan 21 09:25:55 np0005590528 conmon[257130]: conmon 677ec88b2bad3cee6012 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c.scope/container/memory.events
Jan 21 09:25:55 np0005590528 podman[257114]: 2026-01-21 14:25:55.392291615 +0000 UTC m=+0.133276722 container died 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 09:25:55 np0005590528 systemd[1]: var-lib-containers-storage-overlay-c464049208fd11ed77a30787764bcaf3c260a0a99334443f8166bc599a4f2f11-merged.mount: Deactivated successfully.
Jan 21 09:25:55 np0005590528 podman[257114]: 2026-01-21 14:25:55.432020201 +0000 UTC m=+0.173005308 container remove 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 09:25:55 np0005590528 systemd[1]: libpod-conmon-677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c.scope: Deactivated successfully.
Jan 21 09:25:55 np0005590528 podman[257133]: 2026-01-21 14:25:55.482668622 +0000 UTC m=+0.099659894 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 09:25:55 np0005590528 podman[257142]: 2026-01-21 14:25:55.512441546 +0000 UTC m=+0.089381344 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 21 09:25:55 np0005590528 podman[257195]: 2026-01-21 14:25:55.589229453 +0000 UTC m=+0.037592835 container create 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 09:25:55 np0005590528 systemd[1]: Started libpod-conmon-5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa.scope.
Jan 21 09:25:55 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:25:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:55 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:25:55 np0005590528 podman[257195]: 2026-01-21 14:25:55.574438064 +0000 UTC m=+0.022801476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:25:55 np0005590528 podman[257195]: 2026-01-21 14:25:55.673487851 +0000 UTC m=+0.121851253 container init 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 09:25:55 np0005590528 podman[257195]: 2026-01-21 14:25:55.679466728 +0000 UTC m=+0.127830110 container start 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 09:25:55 np0005590528 podman[257195]: 2026-01-21 14:25:55.685082134 +0000 UTC m=+0.133445526 container attach 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:25:55 np0005590528 nova_compute[239261]: 2026-01-21 14:25:55.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:55 np0005590528 nova_compute[239261]: 2026-01-21 14:25:55.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:55 np0005590528 nova_compute[239261]: 2026-01-21 14:25:55.749 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:55 np0005590528 nova_compute[239261]: 2026-01-21 14:25:55.782 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:25:55 np0005590528 nova_compute[239261]: 2026-01-21 14:25:55.783 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:25:55 np0005590528 nova_compute[239261]: 2026-01-21 14:25:55.783 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:25:55 np0005590528 nova_compute[239261]: 2026-01-21 14:25:55.783 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:25:55 np0005590528 nova_compute[239261]: 2026-01-21 14:25:55.784 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:25:56 np0005590528 lvm[257311]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:25:56 np0005590528 lvm[257311]: VG ceph_vg1 finished
Jan 21 09:25:56 np0005590528 lvm[257310]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:25:56 np0005590528 lvm[257310]: VG ceph_vg0 finished
Jan 21 09:25:56 np0005590528 lvm[257313]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:25:56 np0005590528 lvm[257313]: VG ceph_vg2 finished
Jan 21 09:25:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:25:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2568593483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:25:56 np0005590528 lvm[257314]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:25:56 np0005590528 lvm[257314]: VG ceph_vg1 finished
Jan 21 09:25:56 np0005590528 nova_compute[239261]: 2026-01-21 14:25:56.430 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:25:56 np0005590528 fervent_yonath[257212]: {}
Jan 21 09:25:56 np0005590528 systemd[1]: libpod-5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa.scope: Deactivated successfully.
Jan 21 09:25:56 np0005590528 systemd[1]: libpod-5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa.scope: Consumed 1.323s CPU time.
Jan 21 09:25:56 np0005590528 podman[257195]: 2026-01-21 14:25:56.503835122 +0000 UTC m=+0.952198514 container died 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:25:56 np0005590528 systemd[1]: var-lib-containers-storage-overlay-dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960-merged.mount: Deactivated successfully.
Jan 21 09:25:56 np0005590528 podman[257195]: 2026-01-21 14:25:56.559535706 +0000 UTC m=+1.007899088 container remove 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 09:25:56 np0005590528 systemd[1]: libpod-conmon-5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa.scope: Deactivated successfully.
Jan 21 09:25:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:25:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:25:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:25:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:25:56 np0005590528 nova_compute[239261]: 2026-01-21 14:25:56.658 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:25:56 np0005590528 nova_compute[239261]: 2026-01-21 14:25:56.660 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4936MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:25:56 np0005590528 nova_compute[239261]: 2026-01-21 14:25:56.660 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:25:56 np0005590528 nova_compute[239261]: 2026-01-21 14:25:56.661 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:25:56 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:56 np0005590528 nova_compute[239261]: 2026-01-21 14:25:56.750 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:25:56 np0005590528 nova_compute[239261]: 2026-01-21 14:25:56.750 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:25:56 np0005590528 nova_compute[239261]: 2026-01-21 14:25:56.773 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:25:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:25:56 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:25:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:25:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141909906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:25:57 np0005590528 nova_compute[239261]: 2026-01-21 14:25:57.334 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:25:57 np0005590528 nova_compute[239261]: 2026-01-21 14:25:57.341 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:25:57 np0005590528 nova_compute[239261]: 2026-01-21 14:25:57.365 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:25:57 np0005590528 nova_compute[239261]: 2026-01-21 14:25:57.366 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:25:57 np0005590528 nova_compute[239261]: 2026-01-21 14:25:57.366 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:25:58 np0005590528 nova_compute[239261]: 2026-01-21 14:25:58.342 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:58 np0005590528 nova_compute[239261]: 2026-01-21 14:25:58.342 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:58 np0005590528 nova_compute[239261]: 2026-01-21 14:25:58.343 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:25:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:25:58 np0005590528 nova_compute[239261]: 2026-01-21 14:25:58.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:25:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:01 np0005590528 nova_compute[239261]: 2026-01-21 14:26:01.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:26:02 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:04 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:04 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:06 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:08 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:09 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:10 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:26:11 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:26:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:26:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:26:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:26:12 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:26:12 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:14 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:14 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:16 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:18 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:19 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:20 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:22 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 09:26:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2021888656' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 09:26:23 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 09:26:23 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2021888656' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 09:26:24 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:24 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:26 np0005590528 podman[257380]: 2026-01-21 14:26:26.329469603 +0000 UTC m=+0.056254398 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 09:26:26 np0005590528 podman[257379]: 2026-01-21 14:26:26.385434624 +0000 UTC m=+0.112277371 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 21 09:26:26 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:27 np0005590528 systemd-logind[780]: New session 52 of user zuul.
Jan 21 09:26:27 np0005590528 systemd[1]: Started Session 52 of User zuul.
Jan 21 09:26:28 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:29 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:30 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14558 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:30 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:30 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14560 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:31 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 21 09:26:31 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092945474' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 21 09:26:32 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:26:33.919 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:26:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:26:33.921 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:26:33 np0005590528 ovn_metadata_agent[155169]: 2026-01-21 14:26:33.921 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:26:34 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:34 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 09:26:36 np0005590528 ovs-vsctl[257756]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 21 09:26:36 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 09:26:37 np0005590528 virtqemud[238983]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 21 09:26:37 np0005590528 virtqemud[238983]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 21 09:26:37 np0005590528 virtqemud[238983]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 21 09:26:38 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: cache status {prefix=cache status} (starting...)
Jan 21 09:26:38 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: client ls {prefix=client ls} (starting...)
Jan 21 09:26:38 np0005590528 lvm[258091]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 09:26:38 np0005590528 lvm[258091]: VG ceph_vg2 finished
Jan 21 09:26:38 np0005590528 lvm[258100]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 09:26:38 np0005590528 lvm[258100]: VG ceph_vg0 finished
Jan 21 09:26:38 np0005590528 lvm[258103]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 09:26:38 np0005590528 lvm[258103]: VG ceph_vg1 finished
Jan 21 09:26:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14564 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:38 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 09:26:38 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: damage ls {prefix=damage ls} (starting...)
Jan 21 09:26:38 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14566 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:38 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump loads {prefix=dump loads} (starting...)
Jan 21 09:26:39 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 21 09:26:39 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 21 09:26:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14568 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:39 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 21 09:26:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 21 09:26:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304865984' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 21 09:26:39 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 21 09:26:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:26:39
Jan 21 09:26:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 09:26:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 09:26:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta', '.mgr', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 21 09:26:39 np0005590528 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 09:26:39 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:39 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:26:39.792+0000 7fc546f36640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 21 09:26:39 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 21 09:26:39 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 21 09:26:39 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:26:39 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274070401' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:26:39 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 21 09:26:40 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: ops {prefix=ops} (starting...)
Jan 21 09:26:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 21 09:26:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885433264' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 21 09:26:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 21 09:26:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328399773' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 21 09:26:40 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 09:26:40 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session ls {prefix=session ls} (starting...)
Jan 21 09:26:40 np0005590528 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: status {prefix=status} (starting...)
Jan 21 09:26:40 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 21 09:26:40 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383166530' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50a4a37f0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50adb2be0>)]
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:26:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 21 09:26:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484631162' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 09:26:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 21 09:26:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990766666' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14586 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:41 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14590 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:41 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 21 09:26:41 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958258899' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 21 09:26:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:26:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 09:26:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 09:26:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50ad37ac0>)]
Jan 21 09:26:42 np0005590528 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 09:26:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 21 09:26:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606940983' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 21 09:26:42 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 09:26:42 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434642654' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 21 09:26:42 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 09:26:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 21 09:26:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361236892' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 21 09:26:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 21 09:26:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588299280' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 21 09:26:43 np0005590528 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.tnwklj(active, since 42m)
Jan 21 09:26:43 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14600 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:43 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:26:43.518+0000 7fc546f36640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 09:26:43 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 09:26:43 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 21 09:26:43 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263492334' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 21 09:26:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 21 09:26:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666518825' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 21 09:26:44 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14606 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 16384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 16384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 8192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 8192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 983040 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 983040 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 974848 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 974848 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 974848 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 966656 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 966656 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 958464 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 958464 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 958464 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 925696 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 925696 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 737280 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 737280 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 737280 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 729088 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.2 total, 600.0 interval
Cumulative writes: 5492 writes, 23K keys, 5492 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5492 writes, 812 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5492 writes, 23K keys, 5492 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s
Interval WAL: 5492 writes, 812 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 565248 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 565248 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 565248 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 532480 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 532480 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 450560 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 434176 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 271.400299072s of 271.403503418s, submitted: 2
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935662 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 434176 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [0,0,1])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 286720 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 1277952 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1130496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1130496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1097728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1097728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1097728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1056768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1056768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1040384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1040384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 917504 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 917504 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 917504 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc ms_handle_reset ms_handle_reset con 0x557951a20000
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_configure stats_period=5
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 298.759918213s of 300.480377197s, submitted: 90
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 311296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 204800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 204800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 204800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14609 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread fragmentation_score=0.000135 took=0.000023s
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.2 total, 600.0 interval
Cumulative writes: 5720 writes, 24K keys, 5720 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5720 writes, 926 syncs, 6.18 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.046539307s of 300.132598877s, submitted: 24
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 720896 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 198.138137817s of 198.749801636s, submitted: 90
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 393216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 335872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 125 ms_handle_reset con 0x55795377ec00 session 0x557951772540
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fc6b1000/0x0/0x4ffc00000, data 0x8bb4b8/0x979000, compress 0x0/0x0/0x0, omap 0xc572, meta 0x2bc3a8e), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 16908288 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 16908288 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992128 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 16875520 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 16875520 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16850944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16850944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fc6ab000/0x0/0x4ffc00000, data 0x8bd093/0x97d000, compress 0x0/0x0/0x0, omap 0xe84a, meta 0x2bc17b6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 16678912 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056048 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25026560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 126 ms_handle_reset con 0x55795244d000 session 0x557951f8ac40
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25010176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25010176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x152ec6e/0x15f1000, compress 0x0/0x0/0x0, omap 0xe95c, meta 0x2bc16a4), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x152ec6e/0x15f1000, compress 0x0/0x0/0x0, omap 0xe95c, meta 0x2bc16a4), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.134987831s of 12.674633980s, submitted: 36
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063912 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063912 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063912 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063912 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.509849548s of 21.520481110s, submitted: 13
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065604 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 24846336 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 12
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 24805376 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 24805376 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068268 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15308be/0x15f7000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15308be/0x15f7000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 13
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 24551424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067262 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059196472s of 10.092103004s, submitted: 4
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15308be/0x15f7000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15308be/0x15f7000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067390 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 24518656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065698 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 24518656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.215215683s of 10.411604881s, submitted: 5
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067390 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066672 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066672 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066672 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066672 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.363595963s of 25.367301941s, submitted: 2
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba38000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068602 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x15322f2/0x15f7000, compress 0x0/0x0/0x0, omap 0xea0d, meta 0x2bc15f3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x15322f2/0x15f7000, compress 0x0/0x0/0x0, omap 0xea0d, meta 0x2bc15f3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068602 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.925289154s of 11.006295204s, submitted: 26
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15322f2/0x15f7000, compress 0x0/0x0/0x0, omap 0xea0d, meta 0x2bc15f3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071376 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1533d71/0x15fa000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071376 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073068 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073068 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073068 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.620378494s of 25.698316574s, submitted: 15
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 24698880 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074040 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1533ea7/0x15fc000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1533ea7/0x15fc000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077246 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.283190727s of 12.332059860s, submitted: 24
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba2c000/0x0/0x4ffc00000, data 0x1535a11/0x15fe000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076656 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba2c000/0x0/0x4ffc00000, data 0x1535a11/0x15fe000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 24666112 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba25000/0x0/0x4ffc00000, data 0x153902a/0x1603000, compress 0x0/0x0/0x0, omap 0xebd5, meta 0x2bc142b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082078 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 426 B/s wr, 60 op/s
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.215364456s of 10.467832565s, submitted: 106
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088854 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba21000/0x0/0x4ffc00000, data 0x153c894/0x1609000, compress 0x0/0x0/0x0, omap 0xec39, meta 0x2bc13c7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 23568384 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 23568384 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 23568384 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba20000/0x0/0x4ffc00000, data 0x153c894/0x1609000, compress 0x0/0x0/0x0, omap 0xec39, meta 0x2bc13c7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 23560192 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba20000/0x0/0x4ffc00000, data 0x153c894/0x1609000, compress 0x0/0x0/0x0, omap 0xec39, meta 0x2bc13c7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093174 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x153ff68/0x160f000, compress 0x0/0x0/0x0, omap 0x111aa, meta 0x2bbee56), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x153ff68/0x160f000, compress 0x0/0x0/0x0, omap 0x111aa, meta 0x2bbee56), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094866 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1540003/0x1610000, compress 0x0/0x0/0x0, omap 0x111aa, meta 0x2bbee56), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.778944016s of 13.841563225s, submitted: 39
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096442 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096442 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097414 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541aa2/0x1613000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541aa2/0x1613000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097414 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.695030212s of 16.724678040s, submitted: 14
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096696 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 23470080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541aa2/0x1613000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 23470080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097414 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 23470080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 23470080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.763314247s of 11.776571274s, submitted: 2
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099106 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541b3d/0x1614000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541b3d/0x1614000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099920 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541b3d/0x1614000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.232583046s of 11.239578247s, submitted: 3
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098228 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541aa2/0x1613000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba14000/0x0/0x4ffc00000, data 0x15436a7/0x1616000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101722 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba14000/0x0/0x4ffc00000, data 0x15436a7/0x1616000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104496 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba11000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.116888046s of 16.173322678s, submitted: 36
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103906 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x154508b/0x1618000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x154508b/0x1618000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 23412736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 23412736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104878 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba13000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba13000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba13000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104878 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.094568253s of 12.111476898s, submitted: 2
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 23085056 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 14
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x154523b/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x154523b/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107264 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107264 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.934212685s of 11.958757401s, submitted: 4
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107982 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107982 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107982 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.219187737s of 13.221732140s, submitted: 1
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 22978560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 22978560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 22978560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba11000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111332 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114106 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0a000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114106 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0a000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.585231781s of 16.662071228s, submitted: 63
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0a000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113386 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0c000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0c000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116770 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 22937600 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 22937600 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba09000/0x0/0x4ffc00000, data 0x1548a16/0x1623000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 22913024 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 22904832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 22904832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117026 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.542133331s of 11.936735153s, submitted: 6
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154a3df/0x1622000, compress 0x0/0x0/0x0, omap 0x11709, meta 0x2bbe8f7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116754 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154a3df/0x1622000, compress 0x0/0x0/0x0, omap 0x11709, meta 0x2bbe8f7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20832256 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20832256 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119528 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119528 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.180488586s of 16.281837463s, submitted: 62
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 ms_handle_reset con 0x5579540e5400 session 0x557953f29a40
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 ms_handle_reset con 0x5579540e4800 session 0x557953edf880
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 20463616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 20463616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 15
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118218 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118218 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118218 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118218 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.185983658s of 20.225440979s, submitted: 181
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154bf19/0x1626000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121602 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba03000/0x0/0x4ffc00000, data 0x154c0ab/0x1628000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123980 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.403062820s of 10.532799721s, submitted: 14
Jan 21 09:26:44 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x154c00c/0x1627000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 20373504 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116309731' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121092 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122784 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.927226067s of 11.937694550s, submitted: 4
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20348928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124476 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20348928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20348928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154bef3/0x1626000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20348928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 20283392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 20283392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123630 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 20283392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 20250624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 20250624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 20242432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123742 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 6974 writes, 26K keys, 6974 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6974 writes, 1420 syncs, 4.91 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1254 writes, 2803 keys, 1254 commit groups, 1.0 writes per commit group, ingest: 1.29 MB, 0.00 MB/s#012Interval WAL: 1254 writes, 494 syncs, 2.54 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123742 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.066503525s of 18.086311340s, submitted: 8
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beac/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 20234240 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 20234240 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc ms_handle_reset ms_handle_reset con 0x5579519be400
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_configure stats_period=5
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123758 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beac/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154beaa/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123758 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19824640 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beab/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19824640 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19824640 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.091238976s of 11.337059021s, submitted: 10
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19816448 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19816448 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bea9/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123024 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19816448 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19816448 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19791872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19791872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19791872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123758 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19791872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beac/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154beaa/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.114414215s of 11.132976532s, submitted: 9
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122050 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123742 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123024 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123024 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.400957108s of 16.445089340s, submitted: 4
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beab/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123742 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bea9/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 17268736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 17006592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137196 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 16990208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.365338326s of 10.434735298s, submitted: 31
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 16924672 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb9be000/0x0/0x4ffc00000, data 0x1594c4d/0x166e000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 16785408 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 16646144 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 16646144 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135118 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 16318464 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb987000/0x0/0x4ffc00000, data 0x15ca4f1/0x16a5000, compress 0x0/0x0/0x0, omap 0x13d09, meta 0x2bbc2f7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 16695296 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb960000/0x0/0x4ffc00000, data 0x15f1784/0x16cc000, compress 0x0/0x0/0x0, omap 0x13d09, meta 0x2bbc2f7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 16883712 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0x1609201/0x16e3000, compress 0x0/0x0/0x0, omap 0x13d09, meta 0x2bbc2f7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 14786560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 14786560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139438 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87179264 unmapped: 14540800 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa746000/0x0/0x4ffc00000, data 0x166b527/0x1746000, compress 0x0/0x0/0x0, omap 0x13d09, meta 0x3d5c2f7), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87236608 unmapped: 14483456 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.369460106s of 11.168671608s, submitted: 185
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87236608 unmapped: 14483456 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87203840 unmapped: 14516224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa741000/0x0/0x4ffc00000, data 0x166cfa6/0x1749000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 13983744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153996 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 13959168 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 13631488 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 13631488 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa6f3000/0x0/0x4ffc00000, data 0x16bbf5b/0x1799000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 13434880 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 12378112 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161404 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 11984896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa683000/0x0/0x4ffc00000, data 0x172c203/0x1809000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 11984896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa683000/0x0/0x4ffc00000, data 0x172c203/0x1809000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 11558912 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.060370445s of 10.557484627s, submitted: 64
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90308608 unmapped: 11411456 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89890816 unmapped: 11829248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163144 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 11558912 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 11558912 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa639000/0x0/0x4ffc00000, data 0x17746d2/0x1853000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89268224 unmapped: 12451840 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 16
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90636288 unmapped: 11083776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90636288 unmapped: 11083776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170104 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa5c5000/0x0/0x4ffc00000, data 0x17e8156/0x18c7000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 11067392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 10780672 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90791936 unmapped: 10928128 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.014771461s of 10.000402451s, submitted: 67
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90791936 unmapped: 10928128 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90947584 unmapped: 10772480 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166478 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90947584 unmapped: 10772480 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa5a2000/0x0/0x4ffc00000, data 0x180bbf0/0x18ea000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90980352 unmapped: 10739712 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90980352 unmapped: 10739712 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa543000/0x0/0x4ffc00000, data 0x186b0bf/0x1949000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90988544 unmapped: 10731520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 9183232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180522 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 8634368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 9625600 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x18c1209/0x199f000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 9617408 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.556797028s of 10.128489494s, submitted: 52
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 9453568 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa49a000/0x0/0x4ffc00000, data 0x1913bc6/0x19f2000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92250112 unmapped: 9469952 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185870 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92250112 unmapped: 9469952 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 9142272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa45e000/0x0/0x4ffc00000, data 0x194f638/0x1a2d000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 9142272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 9142272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 9682944 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x1951365/0x1a32000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183232 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 9674752 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x1951365/0x1a32000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 9674752 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x1951365/0x1a32000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185702 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.222537041s of 12.803586960s, submitted: 66
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa455000/0x0/0x4ffc00000, data 0x1952ead/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa455000/0x0/0x4ffc00000, data 0x1952ead/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186674 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa456000/0x0/0x4ffc00000, data 0x1952eab/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa456000/0x0/0x4ffc00000, data 0x1952eab/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa456000/0x0/0x4ffc00000, data 0x1952eab/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186978 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.812602997s of 11.090450287s, submitted: 13
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa454000/0x0/0x4ffc00000, data 0x1952d87/0x1a35000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186260 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa454000/0x0/0x4ffc00000, data 0x1952d87/0x1a35000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa454000/0x0/0x4ffc00000, data 0x1952d87/0x1a35000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188750 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa455000/0x0/0x4ffc00000, data 0x1952e50/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 9625600 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 9617408 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.992103577s of 12.011561394s, submitted: 9
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa456000/0x0/0x4ffc00000, data 0x1952d87/0x1a35000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188974 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 8577024 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 8577024 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187778 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa459000/0x0/0x4ffc00000, data 0x1952ce7/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 8577024 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa459000/0x0/0x4ffc00000, data 0x1952ce7/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x1952db0/0x1a34000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.971885681s of 11.010825157s, submitted: 15
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188162 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0x1952c4d/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189120 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0x1952ce9/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0x1952ce9/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189710 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.989832878s of 11.007907867s, submitted: 9
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 ms_handle_reset con 0x55795244d000 session 0x5579542f5c00
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 8257536 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952c4e/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 8257536 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 17
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952c4e/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189104 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952c4c/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [0,2])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190078 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.779275894s of 10.814191818s, submitted: 186
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952c4d/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189344 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189344 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.192089081s of 10.202366829s, submitted: 3
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45b000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191274 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x19546f0/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93577216 unmapped: 8142848 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195036 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 8134656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 8134656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa454000/0x0/0x4ffc00000, data 0x19563dd/0x1a37000, compress 0x0/0x0/0x0, omap 0x13fa1, meta 0x3d5c05f), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.824344635s of 10.738523483s, submitted: 53
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 8110080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197794 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa452000/0x0/0x4ffc00000, data 0x1957e5a/0x1a3a000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957f23/0x1a3b000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201050 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957ef6/0x1a3b000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957ef6/0x1a3b000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.247500420s of 12.277046204s, submitted: 21
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957ef7/0x1a3b000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200316 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa44f000/0x0/0x4ffc00000, data 0x1957f92/0x1a3c000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203404 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957f90/0x1a3c000, compress 0x0/0x0/0x0, omap 0x14187, meta 0x3d5be79), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa44c000/0x0/0x4ffc00000, data 0x1959b98/0x1a3f000, compress 0x0/0x0/0x0, omap 0x1420b, meta 0x3d5bdf5), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206194 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.955485344s of 13.034737587s, submitted: 35
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa44c000/0x0/0x4ffc00000, data 0x1959b98/0x1a3f000, compress 0x0/0x0/0x0, omap 0x1420b, meta 0x3d5bdf5), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa44d000/0x0/0x4ffc00000, data 0x1959b98/0x1a3f000, compress 0x0/0x0/0x0, omap 0x1420b, meta 0x3d5bdf5), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207136 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 7028736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa447000/0x0/0x4ffc00000, data 0x195b66c/0x1a42000, compress 0x0/0x0/0x0, omap 0x1424e, meta 0x3d5bdb2), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 7028736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa444000/0x0/0x4ffc00000, data 0x195d30c/0x1a46000, compress 0x0/0x0/0x0, omap 0x142d2, meta 0x3d5bd2e), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 7012352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214952 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 7004160 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 7004160 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 7004160 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.878732681s of 10.947863579s, submitted: 47
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 6987776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa441000/0x0/0x4ffc00000, data 0x195eebc/0x1a49000, compress 0x0/0x0/0x0, omap 0x142d2, meta 0x3d5bd2e), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 6987776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217006 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 6987776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa443000/0x0/0x4ffc00000, data 0x195eeba/0x1a49000, compress 0x0/0x0/0x0, omap 0x142d2, meta 0x3d5bd2e), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 6979584 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 6979584 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 6979584 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 154 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6971392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa440000/0x0/0x4ffc00000, data 0x19607d7/0x1a4a000, compress 0x0/0x0/0x0, omap 0x15ef4, meta 0x3d5a10c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219176 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6971392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6971392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6971392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa440000/0x0/0x4ffc00000, data 0x19607d7/0x1a4a000, compress 0x0/0x0/0x0, omap 0x15ef4, meta 0x3d5a10c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221950 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x19623dc/0x1a4d000, compress 0x0/0x0/0x0, omap 0x161d2, meta 0x3d59e2e), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.662895203s of 14.452901840s, submitted: 66
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x19623dc/0x1a4d000, compress 0x0/0x0/0x0, omap 0x161d2, meta 0x3d59e2e), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224724 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1963e5b/0x1a50000, compress 0x0/0x0/0x0, omap 0x164a6, meta 0x3d59b5a), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226416 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1963ef6/0x1a51000, compress 0x0/0x0/0x0, omap 0x164a6, meta 0x3d59b5a), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.574105263s of 10.472013474s, submitted: 15
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa435000/0x0/0x4ffc00000, data 0x1965b96/0x1a55000, compress 0x0/0x0/0x0, omap 0x16787, meta 0x3d59879), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 5898240 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95862784 unmapped: 5857280 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232158 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95870976 unmapped: 5849088 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95879168 unmapped: 5840896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95879168 unmapped: 5840896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95879168 unmapped: 5840896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x19675fa/0x1a55000, compress 0x0/0x0/0x0, omap 0x16a8e, meta 0x3d59572), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233878 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa432000/0x0/0x4ffc00000, data 0x19690a5/0x1a58000, compress 0x0/0x0/0x0, omap 0x16b14, meta 0x3d594ec), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa432000/0x0/0x4ffc00000, data 0x19690a5/0x1a58000, compress 0x0/0x0/0x0, omap 0x16b14, meta 0x3d594ec), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.637742043s of 10.431279182s, submitted: 62
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.334789276s of 36.344741821s, submitted: 12
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238344 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239316 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ac7a/0x1a5d000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240864 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196ad15/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240864 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196ad15/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.415119171s of 19.423789978s, submitted: 3
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240720 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196ad15/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239028 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ac7a/0x1a5d000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.201059341s of 10.209357262s, submitted: 4
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.711536407s of 20.883802414s, submitted: 1
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239540 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 18
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196ac3a/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 5824512 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 5816320 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 19
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ad4c/0x1a5d000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 20
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 21
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 67.852340698s of 67.861358643s, submitted: 4
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239540 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239540 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.169133186s of 14.178491592s, submitted: 4
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 ms_handle_reset con 0x5579540e4400 session 0x557954300700
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 ms_handle_reset con 0x5579540e4c00 session 0x5579542ecfc0
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 5562368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 22
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239540 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238822 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.828279495s of 13.854784966s, submitted: 181
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240514 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x196c749/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242300 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.358694077s of 12.426469803s, submitted: 27
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x196c749/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.632019043s of 39.640956879s, submitted: 54
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa428000/0x0/0x4ffc00000, data 0x196e263/0x1a62000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa428000/0x0/0x4ffc00000, data 0x196e263/0x1a62000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246766 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa428000/0x0/0x4ffc00000, data 0x196e263/0x1a62000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245328 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 163 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x196fdcd/0x1a64000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247848 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.221067429s of 16.275087357s, submitted: 25
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.478794098s of 38.538433075s, submitted: 13
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x1973451/0x1a6a000, compress 0x0/0x0/0x0, omap 0x16ec1, meta 0x3d5913f), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253396 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x1973451/0x1a6a000, compress 0x0/0x0/0x0, omap 0x16ec1, meta 0x3d5913f), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa41f000/0x0/0x4ffc00000, data 0x19734ec/0x1a6b000, compress 0x0/0x0/0x0, omap 0x16ec1, meta 0x3d5913f), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255088 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.299714088s of 10.686847687s, submitted: 37
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257144 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa41c000/0x0/0x4ffc00000, data 0x1974ed0/0x1a6d000, compress 0x0/0x0/0x0, omap 0x16f45, meta 0x3d590bb), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa41a000/0x0/0x4ffc00000, data 0x1976ad5/0x1a70000, compress 0x0/0x0/0x0, omap 0x16fd9, meta 0x3d59027), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258944 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.059199333s of 11.116022110s, submitted: 38
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261718 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261718 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261718 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261718 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 169 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.993268967s of 19.002677917s, submitted: 13
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264492 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa414000/0x0/0x4ffc00000, data 0x197a159/0x1a76000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264492 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.663864136s of 11.002883911s, submitted: 24
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa414000/0x0/0x4ffc00000, data 0x197a159/0x1a76000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 9322 writes, 33K keys, 9322 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9322 writes, 2294 syncs, 4.06 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2348 writes, 6511 keys, 2348 commit groups, 1.0 writes per commit group, ingest: 6.88 MB, 0.01 MB/s#012Interval WAL: 2348 writes, 874 syncs, 2.69 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 55.488452911s of 55.496517181s, submitted: 12
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266618 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97083392 unmapped: 4636672 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 4620288 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 4620288 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 4620288 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266546 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266546 data_alloc: 218103808 data_used: 5778
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 4562944 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'config diff' '{prefix=config diff}'
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'config show' '{prefix=config show}'
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'counter dump' '{prefix=counter dump}'
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'counter schema' '{prefix=counter schema}'
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.076944351s of 12.270958900s, submitted: 90
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: osd.2 171 ms_handle_reset con 0x5579543f6400 session 0x5579539b9c00
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 4038656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Got map version 23
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 3809280 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:44 np0005590528 ceph-osd[87843]: do_command 'log dump' '{prefix=log dump}'
Jan 21 09:26:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} v 0)
Jan 21 09:26:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 09:26:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 21 09:26:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009789553' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 21 09:26:45 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14618 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} v 0)
Jan 21 09:26:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 09:26:45 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 21 09:26:45 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510436890' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 21 09:26:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 21 09:26:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097060634' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 21 09:26:46 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 09:26:46 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14624 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:46 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 09:26:46 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951054109' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 21 09:26:47 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:47 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 21 09:26:47 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358713181' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 21 09:26:47 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14632 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 09:26:48 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 21 09:26:48 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2102807065' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 21 09:26:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14636 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 09:26:48 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 09:26:48 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14640 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 09:26:49 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 21 09:26:49 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482148430' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 21 09:26:49 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14644 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 09:26:49 np0005590528 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 21 09:26:49 np0005590528 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:26:49.303+0000 7fc546f36640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 1556480 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 1556480 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 1556480 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 1548288 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 1548288 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 1540096 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 1540096 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 1531904 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 1531904 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 1531904 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 1523712 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 1523712 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 1515520 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 1515520 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 1507328 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 1507328 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1499136 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1490944 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1490944 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1482752 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1482752 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1482752 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 1474560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 1474560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1466368 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1466368 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1458176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1458176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1458176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1458176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1449984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1449984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 1441792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 1441792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1433600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1433600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1433600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 1425408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 1425408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1417216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1417216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1417216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1409024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1409024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1400832 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1392640 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1392640 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1376256 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1376256 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 1368064 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 1368064 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1359872 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1359872 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1359872 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 1351680 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 1351680 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1343488 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1343488 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1335296 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1335296 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1335296 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1335296 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 1327104 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 1327104 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 1318912 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 1310720 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 1302528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 1302528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 1302528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 1294336 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 1294336 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 1286144 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 1286144 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1277952 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1277952 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1277952 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 1269760 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 1269760 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 1261568 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 1261568 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 1253376 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 1253376 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 1245184 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 1236992 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 1236992 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1228800 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1228800 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1228800 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 1220608 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 1220608 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1212416 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1212416 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 1204224 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 1204224 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 1204224 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1196032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1196032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1196032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 6984 writes, 28K keys, 6984 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6984 writes, 1319 syncs, 5.29 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6984 writes, 28K keys, 6984 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s#012Interval WAL: 6984 writes, 1319 syncs, 5.29 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1114112 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1114112 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1105920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1105920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1097728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1097728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1073152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1073152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 1048576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 1048576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 999424 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 999424 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 255.080078125s of 255.664642334s, submitted: 8
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 892928 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 892928 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 868352 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 868352 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 868352 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 860160 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 860160 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 835584 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 835584 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 835584 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 811008 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 811008 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 811008 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 802816 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 802816 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 802816 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: mgrc ms_handle_reset ms_handle_reset con 0x562353502000
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: mgrc handle_mgr_configure stats_period=5
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 ms_handle_reset con 0x562353ab1000 session 0x562353b12e00
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 298.569305420s of 299.268676758s, submitted: 90
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 409600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 409600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 409600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 409600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread fragmentation_score=0.000121 took=0.000016s
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.3 total, 600.0 interval
Cumulative writes: 7208 writes, 29K keys, 7208 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7208 writes, 1431 syncs, 5.04 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.002990723s of 300.343963623s, submitted: 22
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 368640 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:49 np0005590528 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 09:26:56 np0005590528 podman[260271]: 2026-01-21 14:26:56.902352567 +0000 UTC m=+0.090015557 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 09:26:56 np0005590528 podman[260270]: 2026-01-21 14:26:56.926852212 +0000 UTC m=+0.113359354 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:26:56 np0005590528 nova_compute[239261]: 2026-01-21 14:26:56.930 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 21 09:26:56 np0005590528 nova_compute[239261]: 2026-01-21 14:26:56.931 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:26:56 np0005590528 nova_compute[239261]: 2026-01-21 14:26:56.932 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:26:56 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 21 09:26:56 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262221715' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 21 09:26:57 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14692 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 09:26:57 np0005590528 systemd[1]: Starting Hostname Service...
Jan 21 09:26:57 np0005590528 rsyslogd[1002]: imjournal: 16323 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 21 09:26:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 09:26:57 np0005590528 systemd[1]: Started Hostname Service.
Jan 21 09:26:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:26:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 09:26:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:26:57 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 21 09:26:57 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470918179' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 21 09:26:57 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 09:26:57 np0005590528 nova_compute[239261]: 2026-01-21 14:26:57.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:26:57 np0005590528 nova_compute[239261]: 2026-01-21 14:26:57.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 21 09:26:57 np0005590528 nova_compute[239261]: 2026-01-21 14:26:57.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:26:57 np0005590528 nova_compute[239261]: 2026-01-21 14:26:57.765 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:26:57 np0005590528 nova_compute[239261]: 2026-01-21 14:26:57.765 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:26:57 np0005590528 nova_compute[239261]: 2026-01-21 14:26:57.766 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:26:57 np0005590528 nova_compute[239261]: 2026-01-21 14:26:57.766 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 21 09:26:57 np0005590528 nova_compute[239261]: 2026-01-21 14:26:57.766 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915015450' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 21 09:26:58 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14700 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/851145430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:26:58 np0005590528 nova_compute[239261]: 2026-01-21 14:26:58.389 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:26:58 np0005590528 nova_compute[239261]: 2026-01-21 14:26:58.534 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 21 09:26:58 np0005590528 nova_compute[239261]: 2026-01-21 14:26:58.535 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4724MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 21 09:26:58 np0005590528 nova_compute[239261]: 2026-01-21 14:26:58.536 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 21 09:26:58 np0005590528 nova_compute[239261]: 2026-01-21 14:26:58.536 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/20496570' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 21 09:26:58 np0005590528 nova_compute[239261]: 2026-01-21 14:26:58.657 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 21 09:26:58 np0005590528 nova_compute[239261]: 2026-01-21 14:26:58.657 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 21 09:26:58 np0005590528 nova_compute[239261]: 2026-01-21 14:26:58.684 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 21 09:26:58 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 21 09:26:58 np0005590528 podman[260772]: 2026-01-21 14:26:58.726823432 +0000 UTC m=+0.079919823 container create bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 09:26:58 np0005590528 podman[260772]: 2026-01-21 14:26:58.675966656 +0000 UTC m=+0.029063067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:26:58 np0005590528 systemd[1]: Started libpod-conmon-bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088.scope.
Jan 21 09:26:58 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 21 09:26:58 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 21 09:26:59 np0005590528 podman[260772]: 2026-01-21 14:26:59.023438758 +0000 UTC m=+0.376535219 container init bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 09:26:59 np0005590528 podman[260772]: 2026-01-21 14:26:59.032505338 +0000 UTC m=+0.385601739 container start bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 09:26:59 np0005590528 great_banzai[260861]: 167 167
Jan 21 09:26:59 np0005590528 systemd[1]: libpod-bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088.scope: Deactivated successfully.
Jan 21 09:26:59 np0005590528 podman[260772]: 2026-01-21 14:26:59.040074912 +0000 UTC m=+0.393171303 container attach bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 09:26:59 np0005590528 podman[260772]: 2026-01-21 14:26:59.040881611 +0000 UTC m=+0.393978002 container died bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:26:59 np0005590528 systemd[1]: var-lib-containers-storage-overlay-961fe6d7b33d6a03a2f75484dd98d63dbfc7044dc7c645da7f1a232e13ebc0f8-merged.mount: Deactivated successfully.
Jan 21 09:26:59 np0005590528 podman[260772]: 2026-01-21 14:26:59.109225802 +0000 UTC m=+0.462322193 container remove bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:26:59 np0005590528 systemd[1]: libpod-conmon-bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088.scope: Deactivated successfully.
Jan 21 09:26:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 09:26:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110672913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 09:26:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 21 09:26:59 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611037400' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 21 09:26:59 np0005590528 nova_compute[239261]: 2026-01-21 14:26:59.287 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 21 09:26:59 np0005590528 nova_compute[239261]: 2026-01-21 14:26:59.293 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 21 09:26:59 np0005590528 podman[260936]: 2026-01-21 14:26:59.305041149 +0000 UTC m=+0.063870743 container create 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 09:26:59 np0005590528 nova_compute[239261]: 2026-01-21 14:26:59.313 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 21 09:26:59 np0005590528 nova_compute[239261]: 2026-01-21 14:26:59.314 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 21 09:26:59 np0005590528 nova_compute[239261]: 2026-01-21 14:26:59.314 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 21 09:26:59 np0005590528 systemd[1]: Started libpod-conmon-5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6.scope.
Jan 21 09:26:59 np0005590528 podman[260936]: 2026-01-21 14:26:59.27791536 +0000 UTC m=+0.036744984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:26:59 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:26:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:26:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:26:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:26:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:26:59 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 09:26:59 np0005590528 podman[260936]: 2026-01-21 14:26:59.406234767 +0000 UTC m=+0.165064381 container init 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:26:59 np0005590528 podman[260936]: 2026-01-21 14:26:59.413198546 +0000 UTC m=+0.172028140 container start 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 09:26:59 np0005590528 podman[260936]: 2026-01-21 14:26:59.419373327 +0000 UTC m=+0.178202931 container attach 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:26:59 np0005590528 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14718 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 09:26:59 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 09:26:59 np0005590528 stupefied_nobel[261015]: --> passed data devices: 0 physical, 3 LVM
Jan 21 09:26:59 np0005590528 stupefied_nobel[261015]: --> All data devices are unavailable
Jan 21 09:26:59 np0005590528 systemd[1]: libpod-5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6.scope: Deactivated successfully.
Jan 21 09:26:59 np0005590528 podman[260936]: 2026-01-21 14:26:59.98466971 +0000 UTC m=+0.743499304 container died 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 21 09:27:00 np0005590528 systemd[1]: var-lib-containers-storage-overlay-fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622-merged.mount: Deactivated successfully.
Jan 21 09:27:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 21 09:27:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289026277' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 21 09:27:00 np0005590528 podman[260936]: 2026-01-21 14:27:00.312857013 +0000 UTC m=+1.071686607 container remove 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 09:27:00 np0005590528 nova_compute[239261]: 2026-01-21 14:27:00.314 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 21 09:27:00 np0005590528 systemd[1]: libpod-conmon-5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6.scope: Deactivated successfully.
Jan 21 09:27:00 np0005590528 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 09:27:00 np0005590528 podman[261210]: 2026-01-21 14:27:00.784613214 +0000 UTC m=+0.045743352 container create 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 09:27:00 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 21 09:27:00 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3314800499' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 21 09:27:00 np0005590528 podman[261210]: 2026-01-21 14:27:00.764096846 +0000 UTC m=+0.025227014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:27:00 np0005590528 systemd[1]: Started libpod-conmon-72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247.scope.
Jan 21 09:27:00 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:27:00 np0005590528 podman[261210]: 2026-01-21 14:27:00.930544229 +0000 UTC m=+0.191674377 container init 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 09:27:00 np0005590528 podman[261210]: 2026-01-21 14:27:00.939893717 +0000 UTC m=+0.201023855 container start 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 09:27:00 np0005590528 great_gauss[261233]: 167 167
Jan 21 09:27:00 np0005590528 systemd[1]: libpod-72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247.scope: Deactivated successfully.
Jan 21 09:27:00 np0005590528 podman[261210]: 2026-01-21 14:27:00.968359818 +0000 UTC m=+0.229489976 container attach 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 09:27:00 np0005590528 podman[261210]: 2026-01-21 14:27:00.968711476 +0000 UTC m=+0.229841614 container died 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:27:01 np0005590528 systemd[1]: var-lib-containers-storage-overlay-496c8da67dd0b971724f0d80c1ec0d5ba3c945ba1c4ecb3c23e0193508af8ab2-merged.mount: Deactivated successfully.
Jan 21 09:27:01 np0005590528 podman[261210]: 2026-01-21 14:27:01.043280868 +0000 UTC m=+0.304411006 container remove 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:27:01 np0005590528 systemd[1]: libpod-conmon-72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247.scope: Deactivated successfully.
Jan 21 09:27:01 np0005590528 podman[261284]: 2026-01-21 14:27:01.239299 +0000 UTC m=+0.067678335 container create 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 09:27:01 np0005590528 systemd[1]: Started libpod-conmon-35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059.scope.
Jan 21 09:27:01 np0005590528 podman[261284]: 2026-01-21 14:27:01.194156723 +0000 UTC m=+0.022536078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 09:27:01 np0005590528 systemd[1]: Started libcrun container.
Jan 21 09:27:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 09:27:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 09:27:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 09:27:01 np0005590528 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 09:27:01 np0005590528 podman[261284]: 2026-01-21 14:27:01.343766158 +0000 UTC m=+0.172145503 container init 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 09:27:01 np0005590528 podman[261284]: 2026-01-21 14:27:01.356355004 +0000 UTC m=+0.184734339 container start 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:27:01 np0005590528 podman[261284]: 2026-01-21 14:27:01.368988571 +0000 UTC m=+0.197367916 container attach 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 09:27:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 21 09:27:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/536238962' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]: {
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:    "0": [
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:        {
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "devices": [
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "/dev/loop3"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            ],
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_name": "ceph_lv0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_size": "21470642176",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "name": "ceph_lv0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "tags": {
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cluster_name": "ceph",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.crush_device_class": "",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.encrypted": "0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.objectstore": "bluestore",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osd_id": "0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.type": "block",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.vdo": "0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.with_tpm": "0"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            },
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "type": "block",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "vg_name": "ceph_vg0"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:        }
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:    ],
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:    "1": [
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:        {
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "devices": [
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "/dev/loop4"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            ],
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_name": "ceph_lv1",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_size": "21470642176",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "name": "ceph_lv1",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "tags": {
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cluster_name": "ceph",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.crush_device_class": "",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.encrypted": "0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.objectstore": "bluestore",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osd_id": "1",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.type": "block",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.vdo": "0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.with_tpm": "0"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            },
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "type": "block",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "vg_name": "ceph_vg1"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:        }
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:    ],
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:    "2": [
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:        {
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "devices": [
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "/dev/loop5"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            ],
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_name": "ceph_lv2",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_size": "21470642176",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "name": "ceph_lv2",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "tags": {
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cephx_lockbox_secret": "",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.cluster_name": "ceph",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.crush_device_class": "",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.encrypted": "0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.objectstore": "bluestore",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osd_id": "2",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.type": "block",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.vdo": "0",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:                "ceph.with_tpm": "0"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            },
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "type": "block",
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:            "vg_name": "ceph_vg2"
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:        }
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]:    ]
Jan 21 09:27:01 np0005590528 stoic_beaver[261311]: }
Jan 21 09:27:01 np0005590528 systemd[1]: libpod-35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059.scope: Deactivated successfully.
Jan 21 09:27:01 np0005590528 nova_compute[239261]: 2026-01-21 14:27:01.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 09:27:01 np0005590528 podman[261360]: 2026-01-21 14:27:01.733472095 +0000 UTC m=+0.043942598 container died 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 09:27:01 np0005590528 systemd[1]: var-lib-containers-storage-overlay-cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa-merged.mount: Deactivated successfully.
Jan 21 09:27:01 np0005590528 podman[261360]: 2026-01-21 14:27:01.80896755 +0000 UTC m=+0.119438083 container remove 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 09:27:01 np0005590528 systemd[1]: libpod-conmon-35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059.scope: Deactivated successfully.
Jan 21 09:27:01 np0005590528 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 21 09:27:01 np0005590528 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3784836232' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
